Test Report: Docker_Linux_docker_arm64 17936

                    
37a485e4feb148de92f40b101448d251106852cf:2024-02-16:33175

Tests failed (9/330)

TestAddons/parallel/Ingress (36.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-105162 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-105162 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-105162 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4a8d6560-259e-404c-a3aa-a1362934d26b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4a8d6560-259e-404c-a3aa-a1362934d26b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003253472s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-105162 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.058151538s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-105162 addons disable ingress --alsologtostderr -v=1: (7.71312123s)
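To debug this locally, the failing step can be re-run by hand. A minimal sketch, assuming the addons-105162 profile from this run is still up with ingress-dns enabled; the profile name, binary path, and hostname are taken from the log above, while the pod name kube-ingress-dns-minikube is the addon's usual one and is an assumption here:

	# Re-run the DNS query that timed out above (exit status 1 after ~15s).
	NODE_IP="$(out/minikube-linux-arm64 -p addons-105162 ip)"   # 192.168.49.2 in this run
	nslookup hello-john.test "$NODE_IP"
	# If it still times out, confirm the ingress-dns pod is running on the node
	# (pod name assumed, see above):
	kubectl --context addons-105162 -n kube-system get pod kube-ingress-dns-minikube -o wide

Since the in-cluster nginx ingress curl passed just before this step, a "no servers could be reached" timeout here usually points at nothing answering DNS on the node IP rather than at the ingress itself.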
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-105162
helpers_test.go:235: (dbg) docker inspect addons-105162:

-- stdout --
	[
	    {
	        "Id": "5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e",
	        "Created": "2024-02-16T16:42:44.60233872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:42:44.931626168Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e/hosts",
	        "LogPath": "/var/lib/docker/containers/5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e/5f517cb8c6755712a01fd4677164c90a5f659bda2eb218a715a40fcdec8a518e-json.log",
	        "Name": "/addons-105162",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-105162:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-105162",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a7651eb4c9256be1036a02b270f981fdedd0a9d177516d941ef5d28426563017-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7651eb4c9256be1036a02b270f981fdedd0a9d177516d941ef5d28426563017/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7651eb4c9256be1036a02b270f981fdedd0a9d177516d941ef5d28426563017/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7651eb4c9256be1036a02b270f981fdedd0a9d177516d941ef5d28426563017/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-105162",
	                "Source": "/var/lib/docker/volumes/addons-105162/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-105162",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-105162",
	                "name.minikube.sigs.k8s.io": "addons-105162",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6058430e8f2bf63069e1491ae92d7d4e52af7ee6aca8192b4cb16c0429016b17",
	            "SandboxKey": "/var/run/docker/netns/6058430e8f2b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-105162": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5f517cb8c675",
	                        "addons-105162"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "cfd6176dd5436e825f0f69a6fe491f4b0c2ce7c6c0568522364408724bcf85b4",
	                    "EndpointID": "5e0634489774b13eca778b3840ce2e3de9340cd4a6642303c0f1f0a272675acf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-105162",
	                        "5f517cb8c675"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
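Most of the inspect payload above just confirms a running, healthy container; the detail that matters for this failure is the network address. A quicker way to pull only that field (a sketch using docker inspect's standard -f Go-template flag with the container name from this run):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-105162
	# -> 192.168.49.2, the same address the failing nslookup was pointed at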
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-105162 -n addons-105162
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-105162 logs -n 25: (1.084134817s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-233338                                                                     | download-only-233338   | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-806228                                                                     | download-only-806228   | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-323790                                                                     | download-only-323790   | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | --download-only -p                                                                          | download-docker-797413 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | download-docker-797413                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-797413                                                                   | download-docker-797413 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-025721   | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | binary-mirror-025721                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46499                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-025721                                                                     | binary-mirror-025721   | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| addons  | disable dashboard -p                                                                        | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | addons-105162                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | addons-105162                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-105162 --wait=true                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                                                                 |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:44 UTC | 16 Feb 24 16:44 UTC |
	|         | -p addons-105162                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-105162 ip                                                                            | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:45 UTC |
	| addons  | addons-105162 addons disable                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:45 UTC |
	|         | -p addons-105162                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:45 UTC |
	|         | addons-105162                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-105162 ssh cat                                                                       | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:45 UTC |
	|         | /opt/local-path-provisioner/pvc-9e738d8f-6f65-49b4-8b38-b2f24abc3b7b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-105162 addons disable                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:45 UTC | 16 Feb 24 16:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | addons-105162                                                                               |                        |         |         |                     |                     |
	| addons  | addons-105162 addons                                                                        | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-105162 addons                                                                        | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-105162 addons                                                                        | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-105162 ssh curl -s                                                                   | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-105162 ip                                                                            | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	| addons  | addons-105162 addons disable                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-105162 addons disable                                                                | addons-105162          | jenkins | v1.32.0 | 16 Feb 24 16:46 UTC | 16 Feb 24 16:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:42:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:42:21.482925    8317 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:42:21.483206    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:21.483235    8317 out.go:304] Setting ErrFile to fd 2...
	I0216 16:42:21.483254    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:21.483545    8317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:42:21.484085    8317 out.go:298] Setting JSON to false
	I0216 16:42:21.484904    8317 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1491,"bootTime":1708100250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:42:21.484994    8317 start.go:139] virtualization:  
	I0216 16:42:21.504050    8317 out.go:177] * [addons-105162] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 16:42:21.514052    8317 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:42:21.525982    8317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:42:21.514146    8317 notify.go:220] Checking for updates...
	I0216 16:42:21.536739    8317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:42:21.547713    8317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:42:21.559570    8317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 16:42:21.571277    8317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:42:21.581808    8317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:42:21.610901    8317 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:42:21.611001    8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:21.675386    8317 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:42:21.66589962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:42:21.675502    8317 docker.go:295] overlay module found
	I0216 16:42:21.694533    8317 out.go:177] * Using the docker driver based on user configuration
	I0216 16:42:21.707126    8317 start.go:299] selected driver: docker
	I0216 16:42:21.707153    8317 start.go:903] validating driver "docker" against <nil>
	I0216 16:42:21.707167    8317 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:42:21.707810    8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:21.763278    8317 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:42:21.754637357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:42:21.763434    8317 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:42:21.763649    8317 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 16:42:21.779939    8317 out.go:177] * Using Docker driver with root privileges
	I0216 16:42:21.800288    8317 cni.go:84] Creating CNI manager for ""
	I0216 16:42:21.800325    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:42:21.800338    8317 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 16:42:21.800350    8317 start_flags.go:323] config:
	{Name:addons-105162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-105162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:42:21.836337    8317 out.go:177] * Starting control plane node addons-105162 in cluster addons-105162
	I0216 16:42:21.859991    8317 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:42:21.884665    8317 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:42:21.917377    8317 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:42:21.917442    8317 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0216 16:42:21.917459    8317 cache.go:56] Caching tarball of preloaded images
	I0216 16:42:21.917546    8317 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 16:42:21.917555    8317 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 16:42:21.917636    8317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:42:21.918004    8317 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/config.json ...
	I0216 16:42:21.918039    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/config.json: {Name:mke009b090546cbd0bc083abb007d2fba1f8e718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:21.931761    8317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:42:21.931860    8317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:42:21.931882    8317 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 16:42:21.931891    8317 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 16:42:21.931898    8317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 16:42:21.931906    8317 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf from local cache
	I0216 16:42:37.385038    8317 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf from cached tarball
	I0216 16:42:37.385072    8317 cache.go:194] Successfully downloaded all kic artifacts
	I0216 16:42:37.385128    8317 start.go:365] acquiring machines lock for addons-105162: {Name:mk18d5386e4510771c3aab477a8e932b94588ecf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 16:42:37.385266    8317 start.go:369] acquired machines lock for "addons-105162" in 111.639µs
	I0216 16:42:37.385296    8317 start.go:93] Provisioning new machine with config: &{Name:addons-105162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-105162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 16:42:37.385377    8317 start.go:125] createHost starting for "" (driver="docker")
	I0216 16:42:37.387829    8317 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0216 16:42:37.388062    8317 start.go:159] libmachine.API.Create for "addons-105162" (driver="docker")
	I0216 16:42:37.388099    8317 client.go:168] LocalClient.Create starting
	I0216 16:42:37.388203    8317 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem
	I0216 16:42:37.518997    8317 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem
	I0216 16:42:38.206380    8317 cli_runner.go:164] Run: docker network inspect addons-105162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 16:42:38.221328    8317 cli_runner.go:211] docker network inspect addons-105162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 16:42:38.221414    8317 network_create.go:281] running [docker network inspect addons-105162] to gather additional debugging logs...
	I0216 16:42:38.221437    8317 cli_runner.go:164] Run: docker network inspect addons-105162
	W0216 16:42:38.236956    8317 cli_runner.go:211] docker network inspect addons-105162 returned with exit code 1
	I0216 16:42:38.236988    8317 network_create.go:284] error running [docker network inspect addons-105162]: docker network inspect addons-105162: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-105162 not found
	I0216 16:42:38.237006    8317 network_create.go:286] output of [docker network inspect addons-105162]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-105162 not found
	
	** /stderr **
	I0216 16:42:38.237100    8317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:42:38.252096    8317 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002510970}
	I0216 16:42:38.252138    8317 network_create.go:124] attempt to create docker network addons-105162 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0216 16:42:38.252208    8317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-105162 addons-105162
	I0216 16:42:38.313754    8317 network_create.go:108] docker network addons-105162 192.168.49.0/24 created
	I0216 16:42:38.313784    8317 kic.go:121] calculated static IP "192.168.49.2" for the "addons-105162" container
	I0216 16:42:38.313856    8317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 16:42:38.328415    8317 cli_runner.go:164] Run: docker volume create addons-105162 --label name.minikube.sigs.k8s.io=addons-105162 --label created_by.minikube.sigs.k8s.io=true
	I0216 16:42:38.344351    8317 oci.go:103] Successfully created a docker volume addons-105162
	I0216 16:42:38.344444    8317 cli_runner.go:164] Run: docker run --rm --name addons-105162-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-105162 --entrypoint /usr/bin/test -v addons-105162:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 16:42:40.544314    8317 cli_runner.go:217] Completed: docker run --rm --name addons-105162-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-105162 --entrypoint /usr/bin/test -v addons-105162:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (2.199832646s)
	I0216 16:42:40.544351    8317 oci.go:107] Successfully prepared a docker volume addons-105162
	I0216 16:42:40.544376    8317 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:42:40.544395    8317 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 16:42:40.544477    8317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-105162:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 16:42:44.520938    8317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-105162:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (3.97640589s)
	I0216 16:42:44.520966    8317 kic.go:203] duration metric: took 3.976570 seconds to extract preloaded images to volume
	W0216 16:42:44.521103    8317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 16:42:44.521235    8317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 16:42:44.588678    8317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-105162 --name addons-105162 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-105162 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-105162 --network addons-105162 --ip 192.168.49.2 --volume addons-105162:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 16:42:44.941042    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Running}}
	I0216 16:42:44.963267    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:42:44.995421    8317 cli_runner.go:164] Run: docker exec addons-105162 stat /var/lib/dpkg/alternatives/iptables
	I0216 16:42:45.054243    8317 oci.go:144] the created container "addons-105162" has a running status.
	I0216 16:42:45.054277    8317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa...
	I0216 16:42:45.232171    8317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 16:42:45.259014    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:42:45.285036    8317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 16:42:45.285056    8317 kic_runner.go:114] Args: [docker exec --privileged addons-105162 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 16:42:45.347996    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:42:45.374419    8317 machine.go:88] provisioning docker machine ...
	I0216 16:42:45.374448    8317 ubuntu.go:169] provisioning hostname "addons-105162"
	I0216 16:42:45.374520    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:45.398298    8317 main.go:141] libmachine: Using SSH client type: native
	I0216 16:42:45.398707    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0216 16:42:45.398720    8317 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-105162 && echo "addons-105162" | sudo tee /etc/hostname
	I0216 16:42:45.399364    8317 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 16:42:48.547565    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-105162
	
	I0216 16:42:48.547651    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:48.563536    8317 main.go:141] libmachine: Using SSH client type: native
	I0216 16:42:48.563935    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0216 16:42:48.563958    8317 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-105162' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-105162/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-105162' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 16:42:48.700387    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 16:42:48.700416    8317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 16:42:48.700438    8317 ubuntu.go:177] setting up certificates
	I0216 16:42:48.700447    8317 provision.go:83] configureAuth start
	I0216 16:42:48.700507    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-105162
	I0216 16:42:48.718316    8317 provision.go:138] copyHostCerts
	I0216 16:42:48.718390    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 16:42:48.718523    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 16:42:48.718592    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 16:42:48.718644    8317 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.addons-105162 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-105162]
	I0216 16:42:49.168700    8317 provision.go:172] copyRemoteCerts
	I0216 16:42:49.168760    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 16:42:49.168816    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:49.184146    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:42:49.281015    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 16:42:49.303265    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0216 16:42:49.325774    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 16:42:49.348095    8317 provision.go:86] duration metric: configureAuth took 647.635936ms
	I0216 16:42:49.348119    8317 ubuntu.go:193] setting minikube options for container-runtime
	I0216 16:42:49.348308    8317 config.go:182] Loaded profile config "addons-105162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:42:49.348366    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:49.365015    8317 main.go:141] libmachine: Using SSH client type: native
	I0216 16:42:49.365409    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0216 16:42:49.365425    8317 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 16:42:49.508835    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 16:42:49.508856    8317 ubuntu.go:71] root file system type: overlay
	I0216 16:42:49.508968    8317 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 16:42:49.509042    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:49.524999    8317 main.go:141] libmachine: Using SSH client type: native
	I0216 16:42:49.525412    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0216 16:42:49.525498    8317 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 16:42:49.675781    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 16:42:49.675865    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:49.692505    8317 main.go:141] libmachine: Using SSH client type: native
	I0216 16:42:49.692951    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0216 16:42:49.692976    8317 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 16:42:50.426805    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 16:42:49.672305591 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
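	[editor's note] The command above is an idempotent unit update: write docker.service.new, diff it against the live unit, and only when they differ move it into place and daemon-reload/enable/restart. A rough Go equivalent of that compare-then-swap (syncUnit is a hypothetical helper, assuming root and a local systemd):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // syncUnit installs newContent at path only when it differs from what is
    // already there, then reloads systemd and restarts the service -- the
    // same "diff || { mv && daemon-reload && restart; }" pattern in the log.
    func syncUnit(path string, newContent []byte, service string) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return nil // unit unchanged, nothing to restart
        }
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", service},
            {"systemctl", "restart", service},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=example\n")
        fmt.Println(syncUnit("/lib/systemd/system/docker.service", unit, "docker"))
    }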
	
	I0216 16:42:50.426909    8317 machine.go:91] provisioned docker machine in 5.052469078s
	I0216 16:42:50.426949    8317 client.go:171] LocalClient.Create took 13.038829198s
	I0216 16:42:50.426985    8317 start.go:167] duration metric: libmachine.API.Create for "addons-105162" took 13.038921351s
	I0216 16:42:50.427007    8317 start.go:300] post-start starting for "addons-105162" (driver="docker")
	I0216 16:42:50.427032    8317 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 16:42:50.427113    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 16:42:50.427173    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:50.444916    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:42:50.541253    8317 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 16:42:50.544026    8317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 16:42:50.544062    8317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 16:42:50.544074    8317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 16:42:50.544081    8317 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 16:42:50.544093    8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 16:42:50.544154    8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 16:42:50.544178    8317 start.go:303] post-start completed in 117.152489ms
	I0216 16:42:50.544457    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-105162
	I0216 16:42:50.559376    8317 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/config.json ...
	I0216 16:42:50.559646    8317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 16:42:50.559694    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:50.574653    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:42:50.669215    8317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 16:42:50.673728    8317 start.go:128] duration metric: createHost completed in 13.288335841s
	I0216 16:42:50.673798    8317 start.go:83] releasing machines lock for "addons-105162", held for 13.288518316s
	I0216 16:42:50.673891    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-105162
	I0216 16:42:50.689168    8317 ssh_runner.go:195] Run: cat /version.json
	I0216 16:42:50.689223    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:50.689470    8317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 16:42:50.689520    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:42:50.707966    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:42:50.709325    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:42:50.944998    8317 ssh_runner.go:195] Run: systemctl --version
	I0216 16:42:50.949022    8317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 16:42:50.952984    8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 16:42:50.978008    8317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 16:42:50.978135    8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 16:42:51.020882    8317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0216 16:42:51.020961    8317 start.go:475] detecting cgroup driver to use...
	I0216 16:42:51.021001    8317 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:42:51.021129    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:42:51.038270    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 16:42:51.048429    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 16:42:51.058675    8317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 16:42:51.058747    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 16:42:51.069180    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:42:51.079492    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 16:42:51.090378    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:42:51.101229    8317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 16:42:51.111247    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 16:42:51.121754    8317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 16:42:51.130450    8317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 16:42:51.138857    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:42:51.231594    8317 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 16:42:51.333229    8317 start.go:475] detecting cgroup driver to use...
	I0216 16:42:51.333275    8317 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:42:51.333343    8317 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 16:42:51.348252    8317 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 16:42:51.348335    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 16:42:51.360807    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:42:51.377409    8317 ssh_runner.go:195] Run: which cri-dockerd
	I0216 16:42:51.380785    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 16:42:51.389471    8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 16:42:51.410521    8317 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 16:42:51.511869    8317 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 16:42:51.610533    8317 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 16:42:51.610719    8317 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 16:42:51.631070    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:42:51.724887    8317 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 16:42:51.969714    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 16:42:51.981577    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 16:42:51.992798    8317 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 16:42:52.078561    8317 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 16:42:52.161038    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:42:52.256820    8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 16:42:52.270119    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 16:42:52.281399    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:42:52.367071    8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 16:42:52.436867    8317 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 16:42:52.436954    8317 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 16:42:52.443064    8317 start.go:543] Will wait 60s for crictl version
	I0216 16:42:52.443125    8317 ssh_runner.go:195] Run: which crictl
	I0216 16:42:52.446747    8317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 16:42:52.493529    8317 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 16:42:52.493614    8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 16:42:52.515493    8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 16:42:52.544300    8317 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0216 16:42:52.544399    8317 cli_runner.go:164] Run: docker network inspect addons-105162 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:42:52.559438    8317 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0216 16:42:52.562849    8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
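	[editor's note] The one-liner above upserts a hosts entry: strip any line already ending in the hostname, then append a fresh ip<TAB>host mapping. The same idea in Go (ensureHostsEntry is a hypothetical helper; point it at a scratch file rather than a live /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line for host and appends "ip\thost",
    // the same grep -v / echo / cp dance the log runs against /etc/hosts.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the previous mapping, if any
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("hosts.test", "192.168.49.1", "host.minikube.internal"))
    }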
	I0216 16:42:52.573389    8317 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:42:52.573462    8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:42:52.590568    8317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 16:42:52.590592    8317 docker.go:615] Images already preloaded, skipping extraction
	I0216 16:42:52.590655    8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:42:52.607531    8317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 16:42:52.607559    8317 cache_images.go:84] Images are preloaded, skipping loading
	I0216 16:42:52.607626    8317 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
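	[editor's note] The probe above discovers which cgroup driver the Docker daemon uses so kubelet and containerd can be configured to match (cgroupfs in this run). The equivalent shell-out in Go, assuming a local docker CLI on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // dockerCgroupDriver asks the daemon which cgroup driver it uses, the
    // same probe as "docker info --format {{.CgroupDriver}}" in the log.
    func dockerCgroupDriver() (string, error) {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(dockerCgroupDriver())
    }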
	I0216 16:42:52.656971    8317 cni.go:84] Creating CNI manager for ""
	I0216 16:42:52.656998    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:42:52.657016    8317 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 16:42:52.657034    8317 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-105162 NodeName:addons-105162 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 16:42:52.657183    8317 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-105162"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 16:42:52.657255    8317 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-105162 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-105162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 16:42:52.657337    8317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0216 16:42:52.666158    8317 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 16:42:52.666225    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 16:42:52.674137    8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0216 16:42:52.690620    8317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 16:42:52.707641    8317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0216 16:42:52.724975    8317 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0216 16:42:52.728184    8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 16:42:52.738327    8317 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162 for IP: 192.168.49.2
	I0216 16:42:52.738405    8317 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:52.738572    8317 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 16:42:53.264929    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt ...
	I0216 16:42:53.264960    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt: {Name:mk0a9a0e629f90571977da000ad9a1314e49724f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:53.265164    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key ...
	I0216 16:42:53.265177    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key: {Name:mk44bf004a4de7fec2e726f50963232c842bda97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:53.265264    8317 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 16:42:53.765913    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt ...
	I0216 16:42:53.765945    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt: {Name:mk17e38f606329c8e224ac6648bcce2386ecee78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:53.766118    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key ...
	I0216 16:42:53.766131    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key: {Name:mk9cf1cc4aaa9f818337da742ecfa2da30e5e364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:53.766248    8317 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.key
	I0216 16:42:53.766266    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt with IP's: []
	I0216 16:42:54.218749    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt ...
	I0216 16:42:54.218780    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: {Name:mkfe47ac7511d961c40348c2ac46d92c7c95c901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:54.218960    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.key ...
	I0216 16:42:54.218972    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.key: {Name:mk0600e896790b3d191edd6a9dde2c2a2af760e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:54.219047    8317 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key.dd3b5fb2
	I0216 16:42:54.219066    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 16:42:54.751396    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt.dd3b5fb2 ...
	I0216 16:42:54.751432    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt.dd3b5fb2: {Name:mk32fe6a96f96b1e53a3152f507841553be5943f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:54.751678    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key.dd3b5fb2 ...
	I0216 16:42:54.751696    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key.dd3b5fb2: {Name:mkfa99860461a47b35427d02a196adf288709fb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:54.751790    8317 certs.go:337] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt
	I0216 16:42:54.751864    8317 certs.go:341] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key
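	[editor's note] The apiserver certificate generated above carries IP SANs for the node IP, the first service-CIDR address, and loopback, so clients can reach the apiserver at any of them. A condensed Go sketch of issuing such a cert; it self-signs for brevity, whereas minikube signs with its minikubeCA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a cert whose IP SANs cover the node, service, and
    // loopback addresses, like the apiserver cert generated in the log.
    func newServerCert(ips []string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, ip := range ips {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            return nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
        pemBytes, err := newServerCert([]string{"192.168.49.2", "10.96.0.1", "127.0.0.1", "10.0.0.1"})
        fmt.Println(len(pemBytes), err)
    }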
	I0216 16:42:54.751915    8317 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.key
	I0216 16:42:54.751933    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.crt with IP's: []
	I0216 16:42:55.618016    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.crt ...
	I0216 16:42:55.618045    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.crt: {Name:mkd26cfb937bc6768a41e663243d2d41ff408dae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:55.618240    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.key ...
	I0216 16:42:55.618251    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.key: {Name:mk7b12108de07d59b1f899f234e29dedfc176ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:42:55.618437    8317 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 16:42:55.618482    8317 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 16:42:55.618516    8317 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 16:42:55.618554    8317 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 16:42:55.619147    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 16:42:55.643588    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 16:42:55.666857    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 16:42:55.689434    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 16:42:55.713695    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 16:42:55.735852    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 16:42:55.757980    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 16:42:55.779919    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 16:42:55.802193    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 16:42:55.825179    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 16:42:55.841428    8317 ssh_runner.go:195] Run: openssl version
	I0216 16:42:55.846661    8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 16:42:55.855590    8317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:42:55.858818    8317 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:42:55.858871    8317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:42:55.865651    8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 16:42:55.874460    8317 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 16:42:55.877566    8317 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 16:42:55.877615    8317 kubeadm.go:404] StartCluster: {Name:addons-105162 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-105162 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:42:55.877747    8317 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 16:42:55.894179    8317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 16:42:55.902550    8317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 16:42:55.910480    8317 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 16:42:55.910562    8317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 16:42:55.918570    8317 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 16:42:55.918608    8317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 16:42:55.962458    8317 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 16:42:55.962787    8317 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 16:42:56.015380    8317 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 16:42:56.015458    8317 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 16:42:56.015497    8317 kubeadm.go:322] OS: Linux
	I0216 16:42:56.015545    8317 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 16:42:56.015594    8317 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 16:42:56.015642    8317 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 16:42:56.015692    8317 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 16:42:56.015741    8317 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 16:42:56.015790    8317 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 16:42:56.015837    8317 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0216 16:42:56.015887    8317 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0216 16:42:56.015939    8317 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0216 16:42:56.084044    8317 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 16:42:56.084219    8317 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 16:42:56.084347    8317 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 16:42:56.386540    8317 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 16:42:56.390814    8317 out.go:204]   - Generating certificates and keys ...
	I0216 16:42:56.390911    8317 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 16:42:56.390980    8317 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 16:42:56.767907    8317 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 16:42:57.210413    8317 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 16:42:58.597753    8317 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 16:42:59.139673    8317 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 16:43:00.203239    8317 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 16:43:00.203497    8317 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-105162 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:43:00.591618    8317 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 16:43:00.591768    8317 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-105162 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:43:00.859105    8317 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 16:43:01.127295    8317 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 16:43:01.533366    8317 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 16:43:01.533633    8317 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 16:43:01.845068    8317 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 16:43:02.294001    8317 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 16:43:03.082945    8317 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 16:43:03.631593    8317 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 16:43:03.632367    8317 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 16:43:03.635205    8317 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 16:43:03.637782    8317 out.go:204]   - Booting up control plane ...
	I0216 16:43:03.637883    8317 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 16:43:03.637988    8317 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 16:43:03.638531    8317 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 16:43:03.652738    8317 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 16:43:03.653765    8317 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 16:43:03.653973    8317 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 16:43:03.761276    8317 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 16:43:11.264584    8317 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.508160 seconds
	I0216 16:43:11.264723    8317 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 16:43:11.277408    8317 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 16:43:11.800937    8317 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 16:43:11.801125    8317 kubeadm.go:322] [mark-control-plane] Marking the node addons-105162 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 16:43:12.313162    8317 kubeadm.go:322] [bootstrap-token] Using token: ddkzr1.h32hlke3z8pik8na
	I0216 16:43:12.315688    8317 out.go:204]   - Configuring RBAC rules ...
	I0216 16:43:12.315816    8317 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 16:43:12.322284    8317 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 16:43:12.332815    8317 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0216 16:43:12.336869    8317 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0216 16:43:12.340865    8317 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 16:43:12.344952    8317 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 16:43:12.359063    8317 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 16:43:12.585762    8317 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 16:43:12.727947    8317 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 16:43:12.729413    8317 kubeadm.go:322] 
	I0216 16:43:12.729480    8317 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 16:43:12.729487    8317 kubeadm.go:322] 
	I0216 16:43:12.729559    8317 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 16:43:12.729564    8317 kubeadm.go:322] 
	I0216 16:43:12.729588    8317 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 16:43:12.730006    8317 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 16:43:12.730060    8317 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 16:43:12.730068    8317 kubeadm.go:322] 
	I0216 16:43:12.730119    8317 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 16:43:12.730124    8317 kubeadm.go:322] 
	I0216 16:43:12.730169    8317 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 16:43:12.730174    8317 kubeadm.go:322] 
	I0216 16:43:12.730231    8317 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 16:43:12.730301    8317 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 16:43:12.730366    8317 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 16:43:12.730371    8317 kubeadm.go:322] 
	I0216 16:43:12.730649    8317 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 16:43:12.730726    8317 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 16:43:12.730731    8317 kubeadm.go:322] 
	I0216 16:43:12.731003    8317 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ddkzr1.h32hlke3z8pik8na \
	I0216 16:43:12.731104    8317 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:985c0c270eb19ee200225b2f669d5c43e8649dded41ae1ed84720452ba5310cd \
	I0216 16:43:12.731296    8317 kubeadm.go:322] 	--control-plane 
	I0216 16:43:12.731305    8317 kubeadm.go:322] 
	I0216 16:43:12.731567    8317 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 16:43:12.731577    8317 kubeadm.go:322] 
	I0216 16:43:12.731830    8317 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ddkzr1.h32hlke3z8pik8na \
	I0216 16:43:12.732100    8317 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:985c0c270eb19ee200225b2f669d5c43e8649dded41ae1ed84720452ba5310cd 
	I0216 16:43:12.735763    8317 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 16:43:12.735869    8317 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 16:43:12.735888    8317 cni.go:84] Creating CNI manager for ""
	I0216 16:43:12.735911    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:43:12.738740    8317 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 16:43:12.740851    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 16:43:12.751931    8317 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
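The 457-byte conflist scp'd above is not reproduced in the log. As a rough sketch only, a bridge-plugin conflist in the upstream CNI format generally looks like the following; every field value here, including the pod subnet, is an assumption for illustration and not minikube's actual file:

	cat <<'EOF' > /tmp/1-k8s.conflist.example
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF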
	I0216 16:43:12.780201    8317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 16:43:12.780346    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:12.780434    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=addons-105162 minikube.k8s.io/updated_at=2024_02_16T16_43_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:12.816463    8317 ops.go:34] apiserver oom_adj: -16
	I0216 16:43:13.059327    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:13.560091    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:14.059461    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:14.560329    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:15.059907    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:15.559951    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:16.060154    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:16.559453    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:17.059557    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:17.560082    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:18.060310    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:18.559992    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:19.059966    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:19.560386    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:20.059659    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:20.559443    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:21.060165    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:21.559645    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:22.059486    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:22.559936    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:23.060140    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:23.559971    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:24.060356    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:24.559982    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:25.059968    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:25.559524    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:26.059955    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 16:43:26.149919    8317 kubeadm.go:1088] duration metric: took 13.369615282s to wait for elevateKubeSystemPrivileges.
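The half-second polling of "kubectl get sa default" above is minikube waiting for the "default" ServiceAccount to exist before the cluster-admin binding for kube-system:default can take effect (the elevateKubeSystemPrivileges step timed at 13.37s). A shell equivalent of that loop, assuming the same binary and kubeconfig paths shown in the log:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig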
	I0216 16:43:26.149948    8317 kubeadm.go:406] StartCluster complete in 30.272343159s
	I0216 16:43:26.149964    8317 settings.go:142] acquiring lock: {Name:mkb7d1073df18b92aae32c7933eb8e8868b57c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:43:26.150073    8317 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:43:26.150438    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:43:26.150743    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 16:43:26.151002    8317 config.go:182] Loaded profile config "addons-105162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:43:26.151064    8317 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0216 16:43:26.151133    8317 addons.go:69] Setting yakd=true in profile "addons-105162"
	I0216 16:43:26.151153    8317 addons.go:234] Setting addon yakd=true in "addons-105162"
	I0216 16:43:26.151197    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.151687    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.151981    8317 addons.go:69] Setting metrics-server=true in profile "addons-105162"
	I0216 16:43:26.152031    8317 addons.go:234] Setting addon metrics-server=true in "addons-105162"
	I0216 16:43:26.152067    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.152104    8317 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-105162"
	I0216 16:43:26.152117    8317 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-105162"
	I0216 16:43:26.152148    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.152534    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.153042    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.153361    8317 addons.go:69] Setting registry=true in profile "addons-105162"
	I0216 16:43:26.153383    8317 addons.go:234] Setting addon registry=true in "addons-105162"
	I0216 16:43:26.153419    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.153811    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.154661    8317 addons.go:69] Setting cloud-spanner=true in profile "addons-105162"
	I0216 16:43:26.154686    8317 addons.go:234] Setting addon cloud-spanner=true in "addons-105162"
	I0216 16:43:26.154731    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.155103    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.155545    8317 addons.go:69] Setting storage-provisioner=true in profile "addons-105162"
	I0216 16:43:26.155567    8317 addons.go:234] Setting addon storage-provisioner=true in "addons-105162"
	I0216 16:43:26.155603    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.155967    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.158094    8317 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-105162"
	I0216 16:43:26.158156    8317 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-105162"
	I0216 16:43:26.158197    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.158575    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.162614    8317 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-105162"
	I0216 16:43:26.162646    8317 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-105162"
	I0216 16:43:26.162945    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.167621    8317 addons.go:69] Setting default-storageclass=true in profile "addons-105162"
	I0216 16:43:26.167659    8317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-105162"
	I0216 16:43:26.168025    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.169293    8317 addons.go:69] Setting volumesnapshots=true in profile "addons-105162"
	I0216 16:43:26.169320    8317 addons.go:234] Setting addon volumesnapshots=true in "addons-105162"
	I0216 16:43:26.169374    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.169781    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.186243    8317 addons.go:69] Setting gcp-auth=true in profile "addons-105162"
	I0216 16:43:26.186282    8317 mustload.go:65] Loading cluster: addons-105162
	I0216 16:43:26.186475    8317 config.go:182] Loaded profile config "addons-105162": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:43:26.186720    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.204885    8317 addons.go:69] Setting ingress=true in profile "addons-105162"
	I0216 16:43:26.204935    8317 addons.go:234] Setting addon ingress=true in "addons-105162"
	I0216 16:43:26.205013    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.205516    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.248563    8317 addons.go:69] Setting ingress-dns=true in profile "addons-105162"
	I0216 16:43:26.248600    8317 addons.go:234] Setting addon ingress-dns=true in "addons-105162"
	I0216 16:43:26.248684    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.249132    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.281849    8317 addons.go:69] Setting inspektor-gadget=true in profile "addons-105162"
	I0216 16:43:26.281879    8317 addons.go:234] Setting addon inspektor-gadget=true in "addons-105162"
	I0216 16:43:26.281925    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.284164    8317 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0216 16:43:26.285949    8317 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0216 16:43:26.285976    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0216 16:43:26.286042    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
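The Go template in the docker inspect calls above extracts the host port published for container port 22/tcp, which is the SSH endpoint the sshutil clients below dial at 127.0.0.1:32772. An equivalent one-off query for the same information:

	docker port addons-105162 22/tcp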
	I0216 16:43:26.282355    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.323685    8317 out.go:177]   - Using image docker.io/registry:2.8.3
	I0216 16:43:26.325722    8317 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0216 16:43:26.327775    8317 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0216 16:43:26.327792    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0216 16:43:26.327864    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.330939    8317 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:43:26.332797    8317 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 16:43:26.332814    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 16:43:26.332881    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.368720    8317 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0216 16:43:26.370433    8317 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0216 16:43:26.370464    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0216 16:43:26.370535    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.377044    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0216 16:43:26.379055    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0216 16:43:26.381983    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0216 16:43:26.384616    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0216 16:43:26.388280    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0216 16:43:26.391369    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0216 16:43:26.400983    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0216 16:43:26.409623    8317 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-105162"
	I0216 16:43:26.411049    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.411581    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.438509    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0216 16:43:26.465830    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0216 16:43:26.465895    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0216 16:43:26.465972    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.472462    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0216 16:43:26.410988    8317 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0216 16:43:26.410993    8317 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0216 16:43:26.410999    8317 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0216 16:43:26.430564    8317 addons.go:234] Setting addon default-storageclass=true in "addons-105162"
	I0216 16:43:26.474918    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.478911    8317 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0216 16:43:26.478934    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0216 16:43:26.478999    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.475491    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:26.475199    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0216 16:43:26.503326    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.505105    8317 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 16:43:26.505155    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 16:43:26.505244    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.533199    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0216 16:43:26.528682    8317 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	W0216 16:43:26.529605    8317 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-105162" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0216 16:43:26.531646    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:26.531660    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0216 16:43:26.533525    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.540892    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.546865    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0216 16:43:26.542505    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	E0216 16:43:26.542519    8317 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0216 16:43:26.542729    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.569229    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0216 16:43:26.567331    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.567354    8317 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 16:43:26.582240    8317 out.go:177] * Verifying Kubernetes components...
	I0216 16:43:26.572909    8317 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0216 16:43:26.584490    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0216 16:43:26.584559    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.596201    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 16:43:26.611406    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.617002    8317 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0216 16:43:26.619297    8317 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0216 16:43:26.619313    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0216 16:43:26.619378    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.652617    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.699006    8317 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0216 16:43:26.700929    8317 out.go:177]   - Using image docker.io/busybox:stable
	I0216 16:43:26.710564    8317 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0216 16:43:26.710585    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0216 16:43:26.710653    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.732194    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.760768    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.769650    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.770528    8317 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 16:43:26.770542    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 16:43:26.770597    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:26.788470    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
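The sed pipeline above edits the CoreDNS Corefile in place before replacing the ConfigMap. Reconstructed from the two sed expressions, the result gains a "log" directive and a hosts stanza ahead of the forward block, mapping host.minikube.internal to the gateway IP; this can be checked afterwards with:

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# should now contain:
	#         hosts {
	#            192.168.49.1 host.minikube.internal
	#            fallthrough
	#         }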
	I0216 16:43:26.809397    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.811189    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.828793    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.836589    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:26.837683    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	W0216 16:43:26.847280    8317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0216 16:43:26.847305    8317 retry.go:31] will retry after 168.52181ms: ssh: handshake failed: EOF
	I0216 16:43:26.862642    8317 node_ready.go:35] waiting up to 6m0s for node "addons-105162" to be "Ready" ...
	I0216 16:43:26.865783    8317 node_ready.go:49] node "addons-105162" has status "Ready":"True"
	I0216 16:43:26.865804    8317 node_ready.go:38] duration metric: took 3.14016ms waiting for node "addons-105162" to be "Ready" ...
	I0216 16:43:26.865814    8317 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 16:43:26.873814    8317 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g6lr7" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:26.987561    8317 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0216 16:43:26.987586    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0216 16:43:27.045360    8317 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0216 16:43:27.045392    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0216 16:43:27.122165    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0216 16:43:27.235918    8317 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 16:43:27.235947    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0216 16:43:27.239364    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0216 16:43:27.256237    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0216 16:43:27.270697    8317 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0216 16:43:27.270721    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0216 16:43:27.306904    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0216 16:43:27.306937    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0216 16:43:27.320518    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0216 16:43:27.348626    8317 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0216 16:43:27.348704    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0216 16:43:27.392455    8317 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 16:43:27.392487    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 16:43:27.395075    8317 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0216 16:43:27.395106    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0216 16:43:27.404670    8317 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0216 16:43:27.404694    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0216 16:43:27.461219    8317 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0216 16:43:27.461244    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0216 16:43:27.494830    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0216 16:43:27.514254    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 16:43:27.552394    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0216 16:43:27.552426    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0216 16:43:27.694133    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0216 16:43:27.721398    8317 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0216 16:43:27.721470    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0216 16:43:27.722662    8317 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 16:43:27.722708    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 16:43:27.795107    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 16:43:27.800185    8317 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0216 16:43:27.800245    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0216 16:43:27.884221    8317 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0216 16:43:27.884295    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0216 16:43:27.982376    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0216 16:43:27.982440    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0216 16:43:28.001852    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0216 16:43:28.292871    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 16:43:28.358247    8317 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0216 16:43:28.358351    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0216 16:43:28.494398    8317 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0216 16:43:28.494476    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0216 16:43:28.557390    8317 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0216 16:43:28.557454    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0216 16:43:28.558692    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0216 16:43:28.558749    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0216 16:43:28.661986    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0216 16:43:28.662010    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0216 16:43:28.880150    8317 pod_ready.go:102] pod "coredns-5dd5756b68-g6lr7" in "kube-system" namespace has status "Ready":"False"
	I0216 16:43:28.937281    8317 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0216 16:43:28.937344    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0216 16:43:29.063489    8317 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0216 16:43:29.063568    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0216 16:43:29.095416    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0216 16:43:29.095483    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0216 16:43:29.227020    8317 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0216 16:43:29.227096    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0216 16:43:29.331222    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0216 16:43:29.357527    8317 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0216 16:43:29.357604    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0216 16:43:29.361534    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0216 16:43:29.361616    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0216 16:43:29.797324    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0216 16:43:29.797387    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0216 16:43:29.807345    8317 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0216 16:43:29.807408    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0216 16:43:29.881125    8317 pod_ready.go:92] pod "coredns-5dd5756b68-g6lr7" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:29.881155    8317 pod_ready.go:81] duration metric: took 3.007269022s waiting for pod "coredns-5dd5756b68-g6lr7" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.881169    8317 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q8dhn" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.887708    8317 pod_ready.go:92] pod "coredns-5dd5756b68-q8dhn" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:29.887735    8317 pod_ready.go:81] duration metric: took 6.557279ms waiting for pod "coredns-5dd5756b68-q8dhn" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.887758    8317 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.895777    8317 pod_ready.go:92] pod "etcd-addons-105162" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:29.895810    8317 pod_ready.go:81] duration metric: took 8.041639ms waiting for pod "etcd-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.895822    8317 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.902544    8317 pod_ready.go:92] pod "kube-apiserver-addons-105162" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:29.902568    8317 pod_ready.go:81] duration metric: took 6.738802ms waiting for pod "kube-apiserver-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.902580    8317 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.913370    8317 pod_ready.go:92] pod "kube-controller-manager-addons-105162" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:29.913393    8317 pod_ready.go:81] duration metric: took 10.805321ms waiting for pod "kube-controller-manager-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.913405    8317 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dznk7" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:29.998901    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0216 16:43:29.998925    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0216 16:43:30.019269    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0216 16:43:30.247002    8317 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0216 16:43:30.247034    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0216 16:43:30.278085    8317 pod_ready.go:92] pod "kube-proxy-dznk7" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:30.278110    8317 pod_ready.go:81] duration metric: took 364.698267ms waiting for pod "kube-proxy-dznk7" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:30.278122    8317 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:30.295038    8317 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.506533968s)
	I0216 16:43:30.295067    8317 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0216 16:43:30.295120    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.172929393s)
	I0216 16:43:30.440838    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.201427809s)
	I0216 16:43:30.502053    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0216 16:43:30.678217    8317 pod_ready.go:92] pod "kube-scheduler-addons-105162" in "kube-system" namespace has status "Ready":"True"
	I0216 16:43:30.678296    8317 pod_ready.go:81] duration metric: took 400.165304ms waiting for pod "kube-scheduler-addons-105162" in "kube-system" namespace to be "Ready" ...
	I0216 16:43:30.678320    8317 pod_ready.go:38] duration metric: took 3.812492797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 16:43:30.678361    8317 api_server.go:52] waiting for apiserver process to appear ...
	I0216 16:43:30.678435    8317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 16:43:31.633197    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.376925614s)
	I0216 16:43:33.145320    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0216 16:43:33.145483    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:33.165656    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:33.197178    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.876612502s)
	I0216 16:43:33.934876    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0216 16:43:34.039546    8317 addons.go:234] Setting addon gcp-auth=true in "addons-105162"
	I0216 16:43:34.039713    8317 host.go:66] Checking if "addons-105162" exists ...
	I0216 16:43:34.041564    8317 cli_runner.go:164] Run: docker container inspect addons-105162 --format={{.State.Status}}
	I0216 16:43:34.064897    8317 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0216 16:43:34.064949    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-105162
	I0216 16:43:34.089645    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/addons-105162/id_rsa Username:docker}
	I0216 16:43:36.343024    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.828741798s)
	I0216 16:43:36.343023    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.848157624s)
	I0216 16:43:36.343123    8317 addons.go:470] Verifying addon ingress=true in "addons-105162"
	I0216 16:43:36.343192    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.050247182s)
	I0216 16:43:36.343213    8317 addons.go:470] Verifying addon metrics-server=true in "addons-105162"
	I0216 16:43:36.348090    8317 out.go:177] * Verifying ingress addon...
	I0216 16:43:36.343130    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.341199043s)
	I0216 16:43:36.343313    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.012015287s)
	I0216 16:43:36.343098    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.547923981s)
	I0216 16:43:36.343358    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.324052575s)
	I0216 16:43:36.343074    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.648907699s)
	I0216 16:43:36.350297    8317 addons.go:470] Verifying addon registry=true in "addons-105162"
	I0216 16:43:36.352820    8317 out.go:177] * Verifying registry addon...
	W0216 16:43:36.350453    8317 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0216 16:43:36.352894    8317 retry.go:31] will retry after 253.232623ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
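This failure is the usual CRD ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, before that CRD is established. minikube's remedy, visible below at 16:43:36.607030, is simply to retry the apply with --force. A sketch of an alternative that avoids the race by waiting for the CRD first (not what minikube does; shown with plain kubectl for brevity):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml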
	I0216 16:43:36.355659    8317 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-105162 service yakd-dashboard -n yakd-dashboard
	
	I0216 16:43:36.351455    8317 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0216 16:43:36.360613    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0216 16:43:36.366374    8317 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0216 16:43:36.366394    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:36.373255    8317 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0216 16:43:36.373323    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:36.607030    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0216 16:43:36.861804    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:36.864766    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:37.386878    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:37.387952    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:37.611192    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.109094715s)
	I0216 16:43:37.611265    8317 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-105162"
	I0216 16:43:37.613462    8317 out.go:177] * Verifying csi-hostpath-driver addon...
	I0216 16:43:37.611462    8317 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.932994669s)
	I0216 16:43:37.613561    8317 api_server.go:72] duration metric: took 11.04232558s to wait for apiserver process to appear ...
	I0216 16:43:37.613570    8317 api_server.go:88] waiting for apiserver healthz status ...
	I0216 16:43:37.613602    8317 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0216 16:43:37.611491    8317 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.546575817s)
	I0216 16:43:37.616213    8317 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0216 16:43:37.618948    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0216 16:43:37.621196    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0216 16:43:37.623613    8317 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0216 16:43:37.623654    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0216 16:43:37.634905    8317 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
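The healthz probe above can be reproduced from the host. The apiserver's serving certificate is cluster-signed, so verification is skipped here with -k, and /healthz is readable without credentials under the default system:public-info-viewer binding:

	curl -sk https://192.168.49.2:8443/healthz
	# ok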
	I0216 16:43:37.641210    8317 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0216 16:43:37.641273    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0216 16:43:37.648670    8317 api_server.go:141] control plane version: v1.28.4
	I0216 16:43:37.648742    8317 api_server.go:131] duration metric: took 35.152335ms to wait for apiserver health ...
	I0216 16:43:37.648765    8317 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 16:43:37.716983    8317 system_pods.go:59] 18 kube-system pods found
	I0216 16:43:37.717066    8317 system_pods.go:61] "coredns-5dd5756b68-g6lr7" [1fa4302d-87dc-4799-9199-92d2f84554ea] Running
	I0216 16:43:37.717087    8317 system_pods.go:61] "coredns-5dd5756b68-q8dhn" [f538f9dc-6eae-427d-b0a1-a174a67a970d] Running
	I0216 16:43:37.717129    8317 system_pods.go:61] "csi-hostpath-attacher-0" [fe081b6b-22c6-41d4-868b-576bc33d04ec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0216 16:43:37.717152    8317 system_pods.go:61] "csi-hostpath-resizer-0" [23d944f8-9bb3-42fd-8355-db09ec83d7fa] Pending
	I0216 16:43:37.717173    8317 system_pods.go:61] "csi-hostpathplugin-qgfd8" [23c90673-c4f8-4268-8778-2cd9f07380d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0216 16:43:37.717197    8317 system_pods.go:61] "etcd-addons-105162" [7fdb78e8-42ef-49a1-906e-84b5d924b7af] Running
	I0216 16:43:37.717228    8317 system_pods.go:61] "kube-apiserver-addons-105162" [43c8b4ee-8da0-4a16-84a2-2929bf9d093e] Running
	I0216 16:43:37.717250    8317 system_pods.go:61] "kube-controller-manager-addons-105162" [62d2c3c0-900b-4503-8bde-2c6c3a711f96] Running
	I0216 16:43:37.717272    8317 system_pods.go:61] "kube-ingress-dns-minikube" [b62dd41c-86c0-4b36-b0b3-aa6aa0befa69] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0216 16:43:37.717294    8317 system_pods.go:61] "kube-proxy-dznk7" [15c24ad0-06f3-4698-904a-5cd6d34393f9] Running
	I0216 16:43:37.717314    8317 system_pods.go:61] "kube-scheduler-addons-105162" [79be534b-cc5d-4ab5-97e3-2116a0e706a5] Running
	I0216 16:43:37.717348    8317 system_pods.go:61] "metrics-server-69cf46c98-cfmzq" [64d818de-dd93-47b2-958e-c9ceef37374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 16:43:37.717370    8317 system_pods.go:61] "nvidia-device-plugin-daemonset-b9xb9" [72e6dd2f-3d43-4897-aa2a-ceb463bb124e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0216 16:43:37.717392    8317 system_pods.go:61] "registry-8w4b4" [97a11573-b083-44c5-ae7a-d82fb336cc2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0216 16:43:37.717423    8317 system_pods.go:61] "registry-proxy-27c7c" [6f79633b-2c8d-4708-9f8b-3ce114882530] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0216 16:43:37.717451    8317 system_pods.go:61] "snapshot-controller-58dbcc7b99-bplx7" [92282b54-5ed3-430d-aa02-82c322db1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0216 16:43:37.717474    8317 system_pods.go:61] "snapshot-controller-58dbcc7b99-r9f97" [498b01d6-1b52-4efe-b6fa-52b244752f70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0216 16:43:37.717494    8317 system_pods.go:61] "storage-provisioner" [438751b9-3958-4531-a8c9-d0ddb7cb3a25] Running
	I0216 16:43:37.717523    8317 system_pods.go:74] duration metric: took 68.740622ms to wait for pod list to return data ...
	I0216 16:43:37.717547    8317 default_sa.go:34] waiting for default service account to be created ...
	I0216 16:43:37.722848    8317 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0216 16:43:37.722906    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0216 16:43:37.740978    8317 default_sa.go:45] found service account: "default"
	I0216 16:43:37.741044    8317 default_sa.go:55] duration metric: took 23.477765ms for default service account to be created ...
	I0216 16:43:37.741069    8317 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 16:43:37.776797    8317 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0216 16:43:37.776858    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0216 16:43:37.789359    8317 system_pods.go:86] 18 kube-system pods found
	I0216 16:43:37.789427    8317 system_pods.go:89] "coredns-5dd5756b68-g6lr7" [1fa4302d-87dc-4799-9199-92d2f84554ea] Running
	I0216 16:43:37.789448    8317 system_pods.go:89] "coredns-5dd5756b68-q8dhn" [f538f9dc-6eae-427d-b0a1-a174a67a970d] Running
	I0216 16:43:37.789472    8317 system_pods.go:89] "csi-hostpath-attacher-0" [fe081b6b-22c6-41d4-868b-576bc33d04ec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0216 16:43:37.789508    8317 system_pods.go:89] "csi-hostpath-resizer-0" [23d944f8-9bb3-42fd-8355-db09ec83d7fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0216 16:43:37.789540    8317 system_pods.go:89] "csi-hostpathplugin-qgfd8" [23c90673-c4f8-4268-8778-2cd9f07380d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0216 16:43:37.789561    8317 system_pods.go:89] "etcd-addons-105162" [7fdb78e8-42ef-49a1-906e-84b5d924b7af] Running
	I0216 16:43:37.789591    8317 system_pods.go:89] "kube-apiserver-addons-105162" [43c8b4ee-8da0-4a16-84a2-2929bf9d093e] Running
	I0216 16:43:37.789620    8317 system_pods.go:89] "kube-controller-manager-addons-105162" [62d2c3c0-900b-4503-8bde-2c6c3a711f96] Running
	I0216 16:43:37.789649    8317 system_pods.go:89] "kube-ingress-dns-minikube" [b62dd41c-86c0-4b36-b0b3-aa6aa0befa69] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0216 16:43:37.789669    8317 system_pods.go:89] "kube-proxy-dznk7" [15c24ad0-06f3-4698-904a-5cd6d34393f9] Running
	I0216 16:43:37.789701    8317 system_pods.go:89] "kube-scheduler-addons-105162" [79be534b-cc5d-4ab5-97e3-2116a0e706a5] Running
	I0216 16:43:37.789729    8317 system_pods.go:89] "metrics-server-69cf46c98-cfmzq" [64d818de-dd93-47b2-958e-c9ceef37374b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 16:43:37.789754    8317 system_pods.go:89] "nvidia-device-plugin-daemonset-b9xb9" [72e6dd2f-3d43-4897-aa2a-ceb463bb124e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0216 16:43:37.789778    8317 system_pods.go:89] "registry-8w4b4" [97a11573-b083-44c5-ae7a-d82fb336cc2d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0216 16:43:37.789809    8317 system_pods.go:89] "registry-proxy-27c7c" [6f79633b-2c8d-4708-9f8b-3ce114882530] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0216 16:43:37.789846    8317 system_pods.go:89] "snapshot-controller-58dbcc7b99-bplx7" [92282b54-5ed3-430d-aa02-82c322db1b6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0216 16:43:37.789867    8317 system_pods.go:89] "snapshot-controller-58dbcc7b99-r9f97" [498b01d6-1b52-4efe-b6fa-52b244752f70] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0216 16:43:37.789887    8317 system_pods.go:89] "storage-provisioner" [438751b9-3958-4531-a8c9-d0ddb7cb3a25] Running
	I0216 16:43:37.789919    8317 system_pods.go:126] duration metric: took 48.830286ms to wait for k8s-apps to be running ...
	I0216 16:43:37.789944    8317 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 16:43:37.790023    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 16:43:37.893434    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0216 16:43:37.917643    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:37.918462    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:38.124704    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0216 16:43:38.363122    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:38.368102    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:38.625298    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0216 16:43:38.869462    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:38.881393    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:38.970505    8317 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.18044436s)
	I0216 16:43:38.970544    8317 system_svc.go:56] duration metric: took 1.180596206s WaitForService to wait for kubelet.
	I0216 16:43:38.970552    8317 kubeadm.go:581] duration metric: took 12.399321268s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 16:43:38.970570    8317 node_conditions.go:102] verifying NodePressure condition ...
	I0216 16:43:38.970777    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.363642682s)
	I0216 16:43:38.973431    8317 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 16:43:38.973472    8317 node_conditions.go:123] node cpu capacity is 2
	I0216 16:43:38.973483    8317 node_conditions.go:105] duration metric: took 2.907255ms to run NodePressure ...
	I0216 16:43:38.973495    8317 start.go:228] waiting for startup goroutines ...
	I0216 16:43:39.125214    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0216 16:43:39.378276    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:43:39.392294    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0216 16:43:39.449816    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.556341673s)
	I0216 16:43:39.452792    8317 addons.go:470] Verifying addon gcp-auth=true in "addons-105162"
	I0216 16:43:39.456315    8317 out.go:177] * Verifying gcp-auth addon...
	I0216 16:43:39.459564    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0216 16:43:39.463449    8317 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0216 16:43:39.463471    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 154 similar kapi.go:96 lines elided: polling pods "kubernetes.io/minikube-addons=csi-hostpath-driver", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry", and "kubernetes.io/minikube-addons=gcp-auth" (all still Pending) from 16:43:39 to 16:43:58 ...]
	I0216 16:43:58.867069    8317 kapi.go:107] duration metric: took 22.506456811s to wait for kubernetes.io/minikube-addons=registry ...
	[... 151 similar kapi.go:96 lines elided: polling pods "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "app.kubernetes.io/name=ingress-nginx" (all still Pending) from 16:43:58 to 16:44:23 ...]
	I0216 16:44:24.125078    8317 kapi.go:107] duration metric: took 46.506123512s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... 48 similar kapi.go:96 lines elided: polling pods "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=gcp-auth" (both still Pending) from 16:44:24 to 16:44:35 ...]
	I0216 16:44:36.362318    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:36.463902    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:36.862637    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:36.963224    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:37.362836    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:37.463429    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:37.862066    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:37.963501    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:38.361839    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:38.463305    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:38.862595    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:38.963019    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:39.362557    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:39.462869    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:39.864154    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:39.963600    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:40.361809    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:40.463707    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:40.862382    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:40.963934    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:41.361873    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:41.463624    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:41.861589    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:41.962876    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:42.362515    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:42.465050    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:42.862479    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:42.963032    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:43.362765    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:43.463481    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:43.862590    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:43.963027    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:44.362214    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:44.479447    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:44.862721    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:44.963335    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:45.363360    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:45.464066    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:45.864525    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:45.964072    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:46.366474    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:46.462950    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:46.862993    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:46.963764    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:47.364480    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0216 16:44:47.466476    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:47.862526    8317 kapi.go:107] duration metric: took 1m11.511066793s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0216 16:44:47.963024    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:48.463286    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:48.963187    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:49.464713    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0216 16:44:49.963236    8317 kapi.go:107] duration metric: took 1m10.503670627s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0216 16:44:49.965010    8317 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-105162 cluster.
	I0216 16:44:49.967097    8317 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0216 16:44:49.969514    8317 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0216 16:44:49.971475    8317 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0216 16:44:49.973383    8317 addons.go:505] enable addons completed in 1m23.822322183s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0216 16:44:49.973423    8317 start.go:233] waiting for cluster config update ...
	I0216 16:44:49.973454    8317 start.go:242] writing updated cluster config ...
	I0216 16:44:49.974293    8317 ssh_runner.go:195] Run: rm -f paused
	I0216 16:44:50.302307    8317 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 16:44:50.304481    8317 out.go:177] * Done! kubectl is now configured to use "addons-105162" cluster and "default" namespace by default
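	
	The gcp-auth note above is actionable: the webhook mutates pods at admission time, so the `gcp-auth-skip-secret` label has to be present when the pod is created. A minimal sketch of opting a pod out, assuming the webhook accepts the value "true" (only the label key is confirmed by the log, and the pod name is illustrative):
	
	    # Create a pod that the gcp-auth webhook should leave unmounted.
	    kubectl --context addons-105162 run no-gcp-auth --image=busybox \
	      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600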
	
	
	==> Docker <==
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.132297281Z" level=info msg="ignoring event" container=08c2a21396e4fad62abf0758225c7b46a71801b2bf4d9b70f2313b31f96ea8ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.137886985Z" level=info msg="ignoring event" container=974b235c576b7ad73cdd870ed824471820b70d1400da6c472518acb73ddb1694 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.145713695Z" level=info msg="ignoring event" container=0d7a39997f099df68708bbd9d2f9465fb5889842b386446fb8d08cd643761063 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.189686036Z" level=info msg="ignoring event" container=036100a8a819736507573230c07232b0f7c2d7617d1e9a038a74204ecf372df0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.280801177Z" level=info msg="ignoring event" container=489af0549c820d819d8cb59974bb4f7543279a0e117859e4843490db4df42b16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.354013873Z" level=info msg="ignoring event" container=151b2cf5c272b1c6f201f12111a490cdf433c1c2be5c737a8ce07b76126083be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:11 addons-105162 dockerd[1126]: time="2024-02-16T16:46:11.383617873Z" level=info msg="ignoring event" container=4a3c8a71359f154258042ae3b9792c3510d2ff915d9945934e1522f8e23b36c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:17 addons-105162 dockerd[1126]: time="2024-02-16T16:46:17.643049550Z" level=info msg="ignoring event" container=f633cfcd24bac3ca6bbe7470c749bb48c8025ff0aaf1d2034ebda2f4678619cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:17 addons-105162 dockerd[1126]: time="2024-02-16T16:46:17.647447483Z" level=info msg="ignoring event" container=50ee5837cd233228db06e5390f10d196849b05ece08f7d0e38ada16c52c1f436 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:17 addons-105162 dockerd[1126]: time="2024-02-16T16:46:17.810763528Z" level=info msg="ignoring event" container=6e512a8520b7d69f0f25cb62e5636394ad8f5bd2985a165d86a2649ccfe2e1be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:17 addons-105162 dockerd[1126]: time="2024-02-16T16:46:17.849091279Z" level=info msg="ignoring event" container=a46537c7b844d3cca6cde33210af00f4e74e6a45a7e8440301e6098800fb8569 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:18 addons-105162 cri-dockerd[1336]: time="2024-02-16T16:46:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09fdffd13e6d92106d08c39ea68cb605fdc1017d091ff5e1a01a511d8ecf7626/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Feb 16 16:46:20 addons-105162 dockerd[1126]: time="2024-02-16T16:46:20.609303921Z" level=info msg="ignoring event" container=a3c7d9ed829247ae6a320b55a147f5d70da778dabc58b6f01f72e3d0dc06c5e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:20 addons-105162 cri-dockerd[1336]: time="2024-02-16T16:46:20Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Feb 16 16:46:20 addons-105162 dockerd[1126]: time="2024-02-16T16:46:20.824568071Z" level=info msg="ignoring event" container=c213423bbd1c653353c8aac9d68a476c97267197980ef3d1e203ac9cf25f000e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:28 addons-105162 cri-dockerd[1336]: time="2024-02-16T16:46:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3025fce01bb9000e91bdf9b362dfb4571458fd05fd5fd080c46690e6999ade8e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Feb 16 16:46:30 addons-105162 cri-dockerd[1336]: time="2024-02-16T16:46:30Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Feb 16 16:46:30 addons-105162 dockerd[1126]: time="2024-02-16T16:46:30.745566975Z" level=info msg="ignoring event" container=2f14f7a75104a3c638b4da1eab991a4e66adca3af6cea008d8741fe25e628333 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:31 addons-105162 dockerd[1126]: time="2024-02-16T16:46:31.394765614Z" level=info msg="ignoring event" container=60273f6b6ae967d39e165616a4abcd6b9415768f3a6260fc5cd0a13d44c0c7ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:38 addons-105162 dockerd[1126]: time="2024-02-16T16:46:38.838444789Z" level=info msg="ignoring event" container=a7734de761225331432455962038ec50a04e60f9eacc44d97a57fc6b9baeb4d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:44 addons-105162 dockerd[1126]: time="2024-02-16T16:46:44.039438701Z" level=info msg="ignoring event" container=8babf60c443b9871d3fb3a5a6faf3e7c8bad673c9fa181a628082eb7e0d72df7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:45 addons-105162 dockerd[1126]: time="2024-02-16T16:46:45.833290360Z" level=info msg="ignoring event" container=b97800f54e3f1279e343379ee26e155f722642149af88973ddb55e0daea3989e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:48 addons-105162 dockerd[1126]: time="2024-02-16T16:46:48.126961052Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010
	Feb 16 16:46:48 addons-105162 dockerd[1126]: time="2024-02-16T16:46:48.193822607Z" level=info msg="ignoring event" container=cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 16:46:48 addons-105162 dockerd[1126]: time="2024-02-16T16:46:48.319451599Z" level=info msg="ignoring event" container=406593c5d0162553f811b335d853bd0761f5bf4a69402bd27b095bc9ae803f0d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b97800f54e3f1       dd1b12fcb6097                                                                                                                8 seconds ago        Exited              hello-world-app           2                   3025fce01bb90       hello-world-app-5d77478584-rbzrd
	64f0fd23d1a7c       nginx@sha256:cedce0b6e276efe62bbf15345053f44cdc5d1c834a63ab7619aa8355093f85d2                                                33 seconds ago       Running             nginx                     0                   09fdffd13e6d9       nginx
	d29a9d10f0a95       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        About a minute ago   Running             headlamp                  0                   1df19870e7025       headlamp-7ddfbb94ff-qgck5
	f4abefab9f38c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 2 minutes ago        Running             gcp-auth                  0                   d14ae638b8b7d       gcp-auth-d4c87556c-jkt6g
	dff4550cbbff5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084   2 minutes ago        Exited              patch                     0                   9ada2b93f1994       ingress-nginx-admission-patch-mkkgw
	e44ec286761c3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084   2 minutes ago        Exited              create                    0                   4a7e73e5d479b       ingress-nginx-admission-create-qk5wf
	ddc3f5ed2e456       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        2 minutes ago        Running             yakd                      0                   9e160361ee882       yakd-dashboard-9947fc6bf-gfmnx
	227236451ea50       ba04bb24b9575                                                                                                                3 minutes ago        Running             storage-provisioner       0                   305f5b945cf1e       storage-provisioner
	6a664d1c63770       97e04611ad434                                                                                                                3 minutes ago        Running             coredns                   0                   185201a1f8b97       coredns-5dd5756b68-g6lr7
	57b2e1f2cb65a       97e04611ad434                                                                                                                3 minutes ago        Running             coredns                   0                   665e2eb9a22d7       coredns-5dd5756b68-q8dhn
	4d9f1198e1d6c       3ca3ca488cf13                                                                                                                3 minutes ago        Running             kube-proxy                0                   1be6d94e30a30       kube-proxy-dznk7
	7175d721a6e1b       9961cbceaf234                                                                                                                3 minutes ago        Running             kube-controller-manager   0                   3cec49d3e690e       kube-controller-manager-addons-105162
	0ad7c340b7f33       05c284c929889                                                                                                                3 minutes ago        Running             kube-scheduler            0                   c4bb26cbae813       kube-scheduler-addons-105162
	5012d73b86b53       04b4c447bb9d4                                                                                                                3 minutes ago        Running             kube-apiserver            0                   ff73cc10782f6       kube-apiserver-addons-105162
	eee39bfd2a5e5       9cdd6470f48c8                                                                                                                3 minutes ago        Running             etcd                      0                   4facaba957e9c       etcd-addons-105162
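	
	The table above follows the CRI `crictl ps -a` layout (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD). A sketch of regenerating it on this node over the same ssh path used elsewhere in this report — treating the exact invocation as an assumption:
	
	    out/minikube-linux-arm64 -p addons-105162 ssh "sudo crictl ps -a"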
	
	
	==> coredns [57b2e1f2cb65] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:41416 - 62074 "HINFO IN 2855861165925763818.953127427498345236. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012965074s
	[INFO] 10.244.0.7:57423 - 51569 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000267129s
	[INFO] 10.244.0.7:57423 - 12655 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155054s
	[INFO] 10.244.0.7:43968 - 21626 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000118089s
	[INFO] 10.244.0.7:43968 - 8295 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000235858s
	[INFO] 10.244.0.7:59008 - 25393 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102015s
	[INFO] 10.244.0.7:59008 - 43055 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081748s
	[INFO] 10.244.0.7:48944 - 7643 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121839s
	[INFO] 10.244.0.7:48944 - 46557 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000229761s
	[INFO] 10.244.0.7:55971 - 51376 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001247354s
	[INFO] 10.244.0.21:49266 - 52150 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000229811s
	[INFO] 10.244.0.21:36760 - 39805 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143878s
	[INFO] 10.244.0.21:41580 - 55074 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.000731403s
	[INFO] 10.244.0.23:54447 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000224748s
	[INFO] 10.244.0.20:51911 - 45720 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000232478s
	[INFO] 10.244.0.20:51911 - 60992 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000161216s
	[INFO] 10.244.0.20:51911 - 16182 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000135024s
	[INFO] 10.244.0.20:51911 - 14750 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101244s
	[INFO] 10.244.0.20:51911 - 23362 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123111s
	[INFO] 10.244.0.20:51911 - 61414 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00009943s
	[INFO] 10.244.0.20:51911 - 24463 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001457529s
	[INFO] 10.244.0.20:51911 - 56913 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001029071s
	[INFO] 10.244.0.20:51911 - 32262 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146783s
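	
	The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion, not failures: with the `ndots:5` option cri-dockerd wrote into the pod's resolv.conf (see the Docker log above), a relative name is tried against every search suffix before the bare FQDN answers NOERROR. A sketch of skipping the expansion by querying an absolute name (trailing dot) from a throwaway pod, whose name here is illustrative:
	
	    kubectl --context addons-105162 run dns-probe --rm -it --restart=Never \
	      --image=busybox -- nslookup hello-world-app.default.svc.cluster.local.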
	
	
	==> coredns [6a664d1c6377] <==
	[INFO] 10.244.0.20:59208 - 44840 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000106962s
	[INFO] 10.244.0.20:59208 - 55374 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00017939s
	[INFO] 10.244.0.20:59208 - 14688 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125999s
	[INFO] 10.244.0.20:59208 - 43219 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000191123s
	[INFO] 10.244.0.20:59208 - 20714 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002254885s
	[INFO] 10.244.0.20:59208 - 32478 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001652418s
	[INFO] 10.244.0.20:59208 - 62097 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000200363s
	[INFO] 10.244.0.20:45264 - 65463 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00111813s
	[INFO] 10.244.0.20:45264 - 15641 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012485s
	[INFO] 10.244.0.20:45264 - 59063 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000123594s
	[INFO] 10.244.0.20:45264 - 64800 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100325s
	[INFO] 10.244.0.20:45264 - 42222 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088304s
	[INFO] 10.244.0.20:45264 - 59673 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058347s
	[INFO] 10.244.0.20:45264 - 27550 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001787042s
	[INFO] 10.244.0.20:45264 - 16512 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001034888s
	[INFO] 10.244.0.20:45264 - 21019 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000137207s
	[INFO] 10.244.0.20:38550 - 22691 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105486s
	[INFO] 10.244.0.20:38550 - 51935 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069884s
	[INFO] 10.244.0.20:38550 - 54439 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113707s
	[INFO] 10.244.0.20:38550 - 53763 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005541s
	[INFO] 10.244.0.20:38550 - 62261 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000093745s
	[INFO] 10.244.0.20:38550 - 47090 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000534s
	[INFO] 10.244.0.20:38550 - 45160 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001150523s
	[INFO] 10.244.0.20:38550 - 57597 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001185339s
	[INFO] 10.244.0.20:38550 - 14895 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078868s
	
	
	==> describe nodes <==
	Name:               addons-105162
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-105162
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9
	                    minikube.k8s.io/name=addons-105162
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_16T16_43_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-105162
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Feb 2024 16:43:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-105162
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Feb 2024 16:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Feb 2024 16:46:48 +0000   Fri, 16 Feb 2024 16:43:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Feb 2024 16:46:48 +0000   Fri, 16 Feb 2024 16:43:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Feb 2024 16:46:48 +0000   Fri, 16 Feb 2024 16:43:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Feb 2024 16:46:48 +0000   Fri, 16 Feb 2024 16:43:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-105162
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 5499c03149aa4320a0073eda64cc4457
	  System UUID:                bdd4b972-e889-4e5b-a3e6-9420949b1891
	  Boot ID:                    28e061af-c4a8-40ad-8619-080c07806076
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-rbzrd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-d4c87556c-jkt6g                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  headlamp                    headlamp-7ddfbb94ff-qgck5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 coredns-5dd5756b68-g6lr7                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m27s
	  kube-system                 coredns-5dd5756b68-q8dhn                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m27s
	  kube-system                 etcd-addons-105162                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m40s
	  kube-system                 kube-apiserver-addons-105162             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-controller-manager-addons-105162    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-proxy-dznk7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-scheduler-addons-105162             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gfmnx           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             368Mi (4%)  596Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m25s                  kube-proxy       
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node addons-105162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node addons-105162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node addons-105162 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet          Node addons-105162 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet          Node addons-105162 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet          Node addons-105162 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m41s                  kubelet          Node addons-105162 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m30s                  kubelet          Node addons-105162 status is now: NodeReady
	  Normal  RegisteredNode           3m28s                  node-controller  Node addons-105162 event: Registered Node addons-105162 in Controller
	
	
	==> dmesg <==
	[Feb16 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015128] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.135828] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.504151] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [eee39bfd2a5e] <==
	{"level":"info","ts":"2024-02-16T16:43:05.997472Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T16:43:05.997499Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T16:43:05.997507Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T16:43:05.997922Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-16T16:43:05.997938Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-16T16:43:06.001117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-16T16:43:06.00122Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-16T16:43:06.2407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-16T16:43:06.240906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-16T16:43:06.241006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-16T16:43:06.241115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-16T16:43:06.2412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-16T16:43:06.241283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-16T16:43:06.241381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-16T16:43:06.244753Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T16:43:06.248821Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-105162 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T16:43:06.250923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T16:43:06.251103Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T16:43:06.251199Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T16:43:06.251297Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T16:43:06.252313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T16:43:06.252676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T16:43:06.252708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T16:43:06.252683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T16:43:06.257657Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [f4abefab9f38] <==
	2024/02/16 16:44:48 GCP Auth Webhook started!
	2024/02/16 16:44:51 Ready to marshal response ...
	2024/02/16 16:44:51 Ready to write response ...
	2024/02/16 16:44:51 Ready to marshal response ...
	2024/02/16 16:44:51 Ready to write response ...
	2024/02/16 16:44:51 Ready to marshal response ...
	2024/02/16 16:44:51 Ready to write response ...
	2024/02/16 16:45:00 Ready to marshal response ...
	2024/02/16 16:45:00 Ready to write response ...
	2024/02/16 16:45:10 Ready to marshal response ...
	2024/02/16 16:45:10 Ready to write response ...
	2024/02/16 16:45:10 Ready to marshal response ...
	2024/02/16 16:45:10 Ready to write response ...
	2024/02/16 16:45:19 Ready to marshal response ...
	2024/02/16 16:45:19 Ready to write response ...
	2024/02/16 16:45:26 Ready to marshal response ...
	2024/02/16 16:45:26 Ready to write response ...
	2024/02/16 16:46:00 Ready to marshal response ...
	2024/02/16 16:46:00 Ready to write response ...
	2024/02/16 16:46:18 Ready to marshal response ...
	2024/02/16 16:46:18 Ready to write response ...
	2024/02/16 16:46:27 Ready to marshal response ...
	2024/02/16 16:46:27 Ready to write response ...
	
	
	==> kernel <==
	 16:46:53 up 29 min,  0 users,  load average: 0.86, 1.20, 0.59
	Linux addons-105162 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [5012d73b86b5] <==
	I0216 16:46:08.632303       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0216 16:46:09.533971       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0216 16:46:09.648420       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0216 16:46:17.397098       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.397145       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.404597       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.405785       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.423156       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.423195       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.439148       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.439787       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.456075       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.456131       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.477951       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.478008       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.479496       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.479739       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:17.490967       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0216 16:46:17.491193       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0216 16:46:18.105437       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	W0216 16:46:18.423831       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0216 16:46:18.451802       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.84.190"}
	W0216 16:46:18.480449       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0216 16:46:18.509965       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0216 16:46:28.261145       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.72.3"}
	
	
	==> kube-controller-manager [7175d721a6e1] <==
	I0216 16:46:27.950450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.070857ms"
	I0216 16:46:27.967011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.51211ms"
	I0216 16:46:27.967793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.336µs"
	I0216 16:46:27.968088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="28.529µs"
	W0216 16:46:28.756036       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:28.756070       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0216 16:46:31.270566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.43µs"
	W0216 16:46:32.270378       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:32.270424       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0216 16:46:32.294466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.118µs"
	I0216 16:46:33.315489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.015µs"
	W0216 16:46:34.783353       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:34.783384       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0216 16:46:36.172743       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:36.172791       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0216 16:46:36.922150       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:36.922190       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0216 16:46:45.075026       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0216 16:46:45.080329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="5.325µs"
	I0216 16:46:45.087981       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0216 16:46:46.495148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.965µs"
	W0216 16:46:49.615442       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:49.615477       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0216 16:46:51.137669       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0216 16:46:51.137705       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [4d9f1198e1d6] <==
	I0216 16:43:27.482582       1 server_others.go:69] "Using iptables proxy"
	I0216 16:43:27.638423       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0216 16:43:28.260289       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0216 16:43:28.263000       1 server_others.go:152] "Using iptables Proxier"
	I0216 16:43:28.263040       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0216 16:43:28.263055       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0216 16:43:28.263087       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0216 16:43:28.263322       1 server.go:846] "Version info" version="v1.28.4"
	I0216 16:43:28.263340       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 16:43:28.264549       1 config.go:188] "Starting service config controller"
	I0216 16:43:28.264565       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0216 16:43:28.264585       1 config.go:97] "Starting endpoint slice config controller"
	I0216 16:43:28.264589       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0216 16:43:28.265128       1 config.go:315] "Starting node config controller"
	I0216 16:43:28.265141       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0216 16:43:28.365035       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0216 16:43:28.365098       1 shared_informer.go:318] Caches are synced for service config
	I0216 16:43:28.365351       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0ad7c340b7f3] <==
	W0216 16:43:10.555543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0216 16:43:10.556151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0216 16:43:10.555600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0216 16:43:10.556462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0216 16:43:10.555638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0216 16:43:10.556717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0216 16:43:10.555764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0216 16:43:10.556896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0216 16:43:10.555811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0216 16:43:10.557096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0216 16:43:10.555859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0216 16:43:10.557284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0216 16:43:10.555906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0216 16:43:10.557455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0216 16:43:10.555952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0216 16:43:10.557618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0216 16:43:10.555987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0216 16:43:10.557782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0216 16:43:10.555455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0216 16:43:10.557941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0216 16:43:10.558185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0216 16:43:10.558305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0216 16:43:10.558874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0216 16:43:10.558987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0216 16:43:11.439396       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 16 16:46:39 addons-105162 kubelet[2344]: I0216 16:46:39.387007    2344 scope.go:117] "RemoveContainer" containerID="660f647bb8c743507d62842fa63785abb0075aa7c7cfff16888bff8d9a80411b"
	Feb 16 16:46:39 addons-105162 kubelet[2344]: I0216 16:46:39.387355    2344 scope.go:117] "RemoveContainer" containerID="a7734de761225331432455962038ec50a04e60f9eacc44d97a57fc6b9baeb4d0"
	Feb 16 16:46:39 addons-105162 kubelet[2344]: E0216 16:46:39.387638    2344 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b62dd41c-86c0-4b36-b0b3-aa6aa0befa69)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="b62dd41c-86c0-4b36-b0b3-aa6aa0befa69"
	Feb 16 16:46:44 addons-105162 kubelet[2344]: I0216 16:46:44.236859    2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bpjv\" (UniqueName: \"kubernetes.io/projected/b62dd41c-86c0-4b36-b0b3-aa6aa0befa69-kube-api-access-7bpjv\") pod \"b62dd41c-86c0-4b36-b0b3-aa6aa0befa69\" (UID: \"b62dd41c-86c0-4b36-b0b3-aa6aa0befa69\") "
	Feb 16 16:46:44 addons-105162 kubelet[2344]: I0216 16:46:44.238863    2344 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62dd41c-86c0-4b36-b0b3-aa6aa0befa69-kube-api-access-7bpjv" (OuterVolumeSpecName: "kube-api-access-7bpjv") pod "b62dd41c-86c0-4b36-b0b3-aa6aa0befa69" (UID: "b62dd41c-86c0-4b36-b0b3-aa6aa0befa69"). InnerVolumeSpecName "kube-api-access-7bpjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 16 16:46:44 addons-105162 kubelet[2344]: I0216 16:46:44.337099    2344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7bpjv\" (UniqueName: \"kubernetes.io/projected/b62dd41c-86c0-4b36-b0b3-aa6aa0befa69-kube-api-access-7bpjv\") on node \"addons-105162\" DevicePath \"\""
	Feb 16 16:46:44 addons-105162 kubelet[2344]: I0216 16:46:44.448192    2344 scope.go:117] "RemoveContainer" containerID="a7734de761225331432455962038ec50a04e60f9eacc44d97a57fc6b9baeb4d0"
	Feb 16 16:46:44 addons-105162 kubelet[2344]: I0216 16:46:44.734127    2344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b62dd41c-86c0-4b36-b0b3-aa6aa0befa69" path="/var/lib/kubelet/pods/b62dd41c-86c0-4b36-b0b3-aa6aa0befa69/volumes"
	Feb 16 16:46:45 addons-105162 kubelet[2344]: I0216 16:46:45.722500    2344 scope.go:117] "RemoveContainer" containerID="60273f6b6ae967d39e165616a4abcd6b9415768f3a6260fc5cd0a13d44c0c7ea"
	Feb 16 16:46:46 addons-105162 kubelet[2344]: I0216 16:46:46.481281    2344 scope.go:117] "RemoveContainer" containerID="60273f6b6ae967d39e165616a4abcd6b9415768f3a6260fc5cd0a13d44c0c7ea"
	Feb 16 16:46:46 addons-105162 kubelet[2344]: I0216 16:46:46.481854    2344 scope.go:117] "RemoveContainer" containerID="b97800f54e3f1279e343379ee26e155f722642149af88973ddb55e0daea3989e"
	Feb 16 16:46:46 addons-105162 kubelet[2344]: E0216 16:46:46.485182    2344 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-rbzrd_default(49899055-65a5-45c8-baca-d3c230e5801b)\"" pod="default/hello-world-app-5d77478584-rbzrd" podUID="49899055-65a5-45c8-baca-d3c230e5801b"
	Feb 16 16:46:46 addons-105162 kubelet[2344]: I0216 16:46:46.733331    2344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7962e9ad-bec4-4f42-acdc-b75db9330708" path="/var/lib/kubelet/pods/7962e9ad-bec4-4f42-acdc-b75db9330708/volumes"
	Feb 16 16:46:46 addons-105162 kubelet[2344]: I0216 16:46:46.733732    2344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fb2210b2-ab86-475d-ba0c-4edc4abbed07" path="/var/lib/kubelet/pods/fb2210b2-ab86-475d-ba0c-4edc4abbed07/volumes"
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.462739    2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77dd473-a451-4b2d-b53b-4619d1b23811-webhook-cert\") pod \"c77dd473-a451-4b2d-b53b-4619d1b23811\" (UID: \"c77dd473-a451-4b2d-b53b-4619d1b23811\") "
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.462808    2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7hx4\" (UniqueName: \"kubernetes.io/projected/c77dd473-a451-4b2d-b53b-4619d1b23811-kube-api-access-x7hx4\") pod \"c77dd473-a451-4b2d-b53b-4619d1b23811\" (UID: \"c77dd473-a451-4b2d-b53b-4619d1b23811\") "
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.465058    2344 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77dd473-a451-4b2d-b53b-4619d1b23811-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c77dd473-a451-4b2d-b53b-4619d1b23811" (UID: "c77dd473-a451-4b2d-b53b-4619d1b23811"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.465696    2344 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77dd473-a451-4b2d-b53b-4619d1b23811-kube-api-access-x7hx4" (OuterVolumeSpecName: "kube-api-access-x7hx4") pod "c77dd473-a451-4b2d-b53b-4619d1b23811" (UID: "c77dd473-a451-4b2d-b53b-4619d1b23811"). InnerVolumeSpecName "kube-api-access-x7hx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.516537    2344 scope.go:117] "RemoveContainer" containerID="cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010"
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.536907    2344 scope.go:117] "RemoveContainer" containerID="cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010"
	Feb 16 16:46:48 addons-105162 kubelet[2344]: E0216 16:46:48.537760    2344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010" containerID="cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010"
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.537811    2344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010"} err="failed to get container status \"cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010\": rpc error: code = Unknown desc = Error response from daemon: No such container: cf86160c231351074367a78a2e6930f3196d4594e004de457beb5257a3a96010"
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.563005    2344 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77dd473-a451-4b2d-b53b-4619d1b23811-webhook-cert\") on node \"addons-105162\" DevicePath \"\""
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.563045    2344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x7hx4\" (UniqueName: \"kubernetes.io/projected/c77dd473-a451-4b2d-b53b-4619d1b23811-kube-api-access-x7hx4\") on node \"addons-105162\" DevicePath \"\""
	Feb 16 16:46:48 addons-105162 kubelet[2344]: I0216 16:46:48.731464    2344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c77dd473-a451-4b2d-b53b-4619d1b23811" path="/var/lib/kubelet/pods/c77dd473-a451-4b2d-b53b-4619d1b23811/volumes"
	
	
	==> storage-provisioner [227236451ea5] <==
	I0216 16:43:34.821462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0216 16:43:34.837197       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0216 16:43:34.837237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0216 16:43:34.846280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0216 16:43:34.846480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-105162_3a213c8b-797b-4689-9248-94ab254ebfcf!
	I0216 16:43:34.850421       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"259c1ab0-e112-4f90-9644-631b0b5285a5", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-105162_3a213c8b-797b-4689-9248-94ab254ebfcf became leader
	I0216 16:43:34.947613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-105162_3a213c8b-797b-4689-9248-94ab254ebfcf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-105162 -n addons-105162
helpers_test.go:261: (dbg) Run:  kubectl --context addons-105162 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (36.52s)
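
The kubelet log above shows the kube-ingress-dns-minikube pod in kube-system stuck in CrashLoopBackOff before its volumes were torn down, which lines up with the ingress failure. A minimal shell sketch for inspecting that pod by hand, assuming the addons-105162 cluster were still running (pod name, namespace, and context are taken from the logs above; these commands are illustrative and not part of the test harness):

	# check the crash-looping ingress-dns pod, then pull logs from its last failed run
	kubectl --context addons-105162 -n kube-system get pod kube-ingress-dns-minikube
	kubectl --context addons-105162 -n kube-system logs kube-ingress-dns-minikube --previous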

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (531.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-416645 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0216 16:52:34.199578    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 16:54:50.355638    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 16:55:18.042678    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 16:55:23.336516    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.341774    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.352032    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.372354    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.412606    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.492876    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.653313    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:23.973855    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:24.614769    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:25.895020    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:28.456775    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:33.576991    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:55:43.817567    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:56:04.297861    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:56:45.259375    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:58:07.179898    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 16:59:50.354940    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:00:23.334926    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 17:00:51.020193    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p ingress-addon-legacy-416645 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m51.173315492s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-416645] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node ingress-addon-legacy-416645 in cluster ingress-addon-legacy-416645
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:00:33 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:33.203720    5227 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-416645_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	  Feb 16 17:00:35 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:35.203811    5227 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-416645_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	  Feb 16 17:00:39 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:39.203089    5227 pod_workers.go:191] Error syncing pod 0dd36e3a1106181565b3bdde468e3d7f ("etcd-ingress-addon-legacy-416645_kube-system(0dd36e3a1106181565b3bdde468e3d7f)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 16:52:10.804967   51509 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:52:10.805127   51509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:52:10.805140   51509 out.go:304] Setting ErrFile to fd 2...
	I0216 16:52:10.805147   51509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:52:10.805401   51509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:52:10.805812   51509 out.go:298] Setting JSON to false
	I0216 16:52:10.806666   51509 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2081,"bootTime":1708100250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:52:10.806727   51509 start.go:139] virtualization:  
	I0216 16:52:10.812342   51509 out.go:177] * [ingress-addon-legacy-416645] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 16:52:10.814783   51509 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:52:10.817811   51509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:52:10.814846   51509 notify.go:220] Checking for updates...
	I0216 16:52:10.820883   51509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:52:10.822718   51509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:52:10.824554   51509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 16:52:10.826657   51509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:52:10.829316   51509 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:52:10.853371   51509 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:52:10.853486   51509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:52:10.922522   51509 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-16 16:52:10.913539037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:52:10.922624   51509 docker.go:295] overlay module found
	I0216 16:52:10.925128   51509 out.go:177] * Using the docker driver based on user configuration
	I0216 16:52:10.927426   51509 start.go:299] selected driver: docker
	I0216 16:52:10.927445   51509 start.go:903] validating driver "docker" against <nil>
	I0216 16:52:10.927458   51509 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:52:10.928086   51509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:52:11.001619   51509 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-16 16:52:10.99223201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:52:11.001806   51509 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:52:11.002050   51509 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 16:52:11.004088   51509 out.go:177] * Using Docker driver with root privileges
	I0216 16:52:11.006007   51509 cni.go:84] Creating CNI manager for ""
	I0216 16:52:11.006039   51509 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:52:11.006052   51509 start_flags.go:323] config:
	{Name:ingress-addon-legacy-416645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-416645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:52:11.008193   51509 out.go:177] * Starting control plane node ingress-addon-legacy-416645 in cluster ingress-addon-legacy-416645
	I0216 16:52:11.010262   51509 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:52:11.012280   51509 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:52:11.014335   51509 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:52:11.014432   51509 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:52:11.029271   51509 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 16:52:11.029294   51509 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 16:52:11.079006   51509 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0216 16:52:11.079034   51509 cache.go:56] Caching tarball of preloaded images
	I0216 16:52:11.079216   51509 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:52:11.081450   51509 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0216 16:52:11.083360   51509 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:52:11.198006   51509 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0216 16:52:30.344813   51509 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:52:30.344915   51509 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:52:31.449565   51509 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0216 16:52:31.449938   51509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/config.json ...
	I0216 16:52:31.449973   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/config.json: {Name:mk551384c095141590acbcb3f022338e788573aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:31.450151   51509 cache.go:194] Successfully downloaded all kic artifacts
	I0216 16:52:31.450201   51509 start.go:365] acquiring machines lock for ingress-addon-legacy-416645: {Name:mk55a862a45da71d895e8fe6384332d031d43efc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 16:52:31.450263   51509 start.go:369] acquired machines lock for "ingress-addon-legacy-416645" in 45.457µs
	I0216 16:52:31.450284   51509 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-416645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-416645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 16:52:31.450356   51509 start.go:125] createHost starting for "" (driver="docker")
	I0216 16:52:31.452667   51509 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0216 16:52:31.452915   51509 start.go:159] libmachine.API.Create for "ingress-addon-legacy-416645" (driver="docker")
	I0216 16:52:31.452938   51509 client.go:168] LocalClient.Create starting
	I0216 16:52:31.452989   51509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem
	I0216 16:52:31.453024   51509 main.go:141] libmachine: Decoding PEM data...
	I0216 16:52:31.453043   51509 main.go:141] libmachine: Parsing certificate...
	I0216 16:52:31.453102   51509 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem
	I0216 16:52:31.453125   51509 main.go:141] libmachine: Decoding PEM data...
	I0216 16:52:31.453147   51509 main.go:141] libmachine: Parsing certificate...
	I0216 16:52:31.453481   51509 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-416645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 16:52:31.468139   51509 cli_runner.go:211] docker network inspect ingress-addon-legacy-416645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 16:52:31.468226   51509 network_create.go:281] running [docker network inspect ingress-addon-legacy-416645] to gather additional debugging logs...
	I0216 16:52:31.468248   51509 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-416645
	W0216 16:52:31.485675   51509 cli_runner.go:211] docker network inspect ingress-addon-legacy-416645 returned with exit code 1
	I0216 16:52:31.485702   51509 network_create.go:284] error running [docker network inspect ingress-addon-legacy-416645]: docker network inspect ingress-addon-legacy-416645: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-416645 not found
	I0216 16:52:31.485717   51509 network_create.go:286] output of [docker network inspect ingress-addon-legacy-416645]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-416645 not found
	
	** /stderr **
	I0216 16:52:31.485820   51509 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:52:31.499649   51509 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004a88e0}
	I0216 16:52:31.499689   51509 network_create.go:124] attempt to create docker network ingress-addon-legacy-416645 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0216 16:52:31.499743   51509 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-416645 ingress-addon-legacy-416645
	I0216 16:52:31.565603   51509 network_create.go:108] docker network ingress-addon-legacy-416645 192.168.49.0/24 created
	I0216 16:52:31.565635   51509 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-416645" container
	I0216 16:52:31.565716   51509 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 16:52:31.579765   51509 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-416645 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-416645 --label created_by.minikube.sigs.k8s.io=true
	I0216 16:52:31.596060   51509 oci.go:103] Successfully created a docker volume ingress-addon-legacy-416645
	I0216 16:52:31.596139   51509 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-416645-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-416645 --entrypoint /usr/bin/test -v ingress-addon-legacy-416645:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 16:52:32.915780   51509 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-416645-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-416645 --entrypoint /usr/bin/test -v ingress-addon-legacy-416645:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.319604327s)
	I0216 16:52:32.915815   51509 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-416645
	I0216 16:52:32.915837   51509 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:52:32.915856   51509 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 16:52:32.915946   51509 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-416645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 16:52:37.506930   51509 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-416645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.590943452s)
	I0216 16:52:37.506960   51509 kic.go:203] duration metric: took 4.591101 seconds to extract preloaded images to volume
	W0216 16:52:37.507127   51509 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 16:52:37.507234   51509 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 16:52:37.562992   51509 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-416645 --name ingress-addon-legacy-416645 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-416645 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-416645 --network ingress-addon-legacy-416645 --ip 192.168.49.2 --volume ingress-addon-legacy-416645:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 16:52:37.875094   51509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Running}}
	I0216 16:52:37.896232   51509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Status}}
	I0216 16:52:37.918190   51509 cli_runner.go:164] Run: docker exec ingress-addon-legacy-416645 stat /var/lib/dpkg/alternatives/iptables
	I0216 16:52:37.988288   51509 oci.go:144] the created container "ingress-addon-legacy-416645" has a running status.
	I0216 16:52:37.988314   51509 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa...
	I0216 16:52:38.650919   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0216 16:52:38.650970   51509 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 16:52:38.681731   51509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Status}}
	I0216 16:52:38.699711   51509 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 16:52:38.699730   51509 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-416645 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 16:52:38.760984   51509 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Status}}
	I0216 16:52:38.779854   51509 machine.go:88] provisioning docker machine ...
	I0216 16:52:38.779886   51509 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-416645"
	I0216 16:52:38.779952   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:38.798823   51509 main.go:141] libmachine: Using SSH client type: native
	I0216 16:52:38.799256   51509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:52:38.799276   51509 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-416645 && echo "ingress-addon-legacy-416645" | sudo tee /etc/hostname
	I0216 16:52:38.963807   51509 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-416645
	
	I0216 16:52:38.963884   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:38.980217   51509 main.go:141] libmachine: Using SSH client type: native
	I0216 16:52:38.980620   51509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:52:38.980683   51509 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-416645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-416645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-416645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 16:52:39.120777   51509 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 16:52:39.120809   51509 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 16:52:39.120840   51509 ubuntu.go:177] setting up certificates
	I0216 16:52:39.120857   51509 provision.go:83] configureAuth start
	I0216 16:52:39.120924   51509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-416645
	I0216 16:52:39.137319   51509 provision.go:138] copyHostCerts
	I0216 16:52:39.137362   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 16:52:39.137393   51509 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 16:52:39.137403   51509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 16:52:39.137478   51509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 16:52:39.137565   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 16:52:39.137589   51509 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 16:52:39.137596   51509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 16:52:39.137625   51509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 16:52:39.137703   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 16:52:39.137723   51509 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 16:52:39.137731   51509 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 16:52:39.137758   51509 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 16:52:39.137818   51509 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-416645 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-416645]
	I0216 16:52:39.922883   51509 provision.go:172] copyRemoteCerts
	I0216 16:52:39.922951   51509 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 16:52:39.922997   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:39.938134   51509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 16:52:40.039720   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0216 16:52:40.039831   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 16:52:40.065060   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0216 16:52:40.065120   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0216 16:52:40.091651   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0216 16:52:40.091712   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 16:52:40.116917   51509 provision.go:86] duration metric: configureAuth took 996.042161ms
	I0216 16:52:40.116943   51509 ubuntu.go:193] setting minikube options for container-runtime
	I0216 16:52:40.117140   51509 config.go:182] Loaded profile config "ingress-addon-legacy-416645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 16:52:40.117210   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:40.133254   51509 main.go:141] libmachine: Using SSH client type: native
	I0216 16:52:40.133653   51509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:52:40.133670   51509 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 16:52:40.272853   51509 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 16:52:40.272871   51509 ubuntu.go:71] root file system type: overlay
	I0216 16:52:40.272987   51509 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 16:52:40.273049   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:40.289197   51509 main.go:141] libmachine: Using SSH client type: native
	I0216 16:52:40.289617   51509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:52:40.289701   51509 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 16:52:40.439426   51509 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 16:52:40.439531   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:40.455929   51509 main.go:141] libmachine: Using SSH client type: native
	I0216 16:52:40.456349   51509 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0216 16:52:40.456373   51509 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 16:52:41.168651   51509 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 16:52:40.432978892 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 16:52:41.168683   51509 machine.go:91] provisioned docker machine in 2.388806936s
	I0216 16:52:41.168694   51509 client.go:171] LocalClient.Create took 9.71575007s
	I0216 16:52:41.168721   51509 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-416645" took 9.71579542s
	I0216 16:52:41.168734   51509 start.go:300] post-start starting for "ingress-addon-legacy-416645" (driver="docker")
	I0216 16:52:41.168745   51509 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 16:52:41.168816   51509 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 16:52:41.168859   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:41.184419   51509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 16:52:41.281533   51509 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 16:52:41.284371   51509 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 16:52:41.284405   51509 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 16:52:41.284416   51509 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 16:52:41.284425   51509 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 16:52:41.284434   51509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 16:52:41.284488   51509 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 16:52:41.284574   51509 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 16:52:41.284590   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> /etc/ssl/certs/75132.pem
	I0216 16:52:41.284715   51509 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 16:52:41.292792   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 16:52:41.315539   51509 start.go:303] post-start completed in 146.791367ms
	I0216 16:52:41.315894   51509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-416645
	I0216 16:52:41.330585   51509 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/config.json ...
	I0216 16:52:41.330845   51509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 16:52:41.330887   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:41.345888   51509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 16:52:41.441184   51509 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 16:52:41.445610   51509 start.go:128] duration metric: createHost completed in 9.995238573s
	I0216 16:52:41.445636   51509 start.go:83] releasing machines lock for "ingress-addon-legacy-416645", held for 9.995361914s
	I0216 16:52:41.445713   51509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-416645
	I0216 16:52:41.461693   51509 ssh_runner.go:195] Run: cat /version.json
	I0216 16:52:41.461749   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:41.461986   51509 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 16:52:41.462063   51509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 16:52:41.485127   51509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 16:52:41.498800   51509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 16:52:41.710954   51509 ssh_runner.go:195] Run: systemctl --version
	I0216 16:52:41.715117   51509 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 16:52:41.719165   51509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 16:52:41.746135   51509 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 16:52:41.746229   51509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 16:52:41.763522   51509 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 16:52:41.779187   51509 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0216 16:52:41.779215   51509 start.go:475] detecting cgroup driver to use...
	I0216 16:52:41.779246   51509 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:52:41.779373   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:52:41.795221   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0216 16:52:41.804741   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 16:52:41.813953   51509 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 16:52:41.814039   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 16:52:41.823071   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:52:41.832386   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 16:52:41.841966   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 16:52:41.851014   51509 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 16:52:41.859642   51509 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 16:52:41.869224   51509 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 16:52:41.877502   51509 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
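
The echo into /proc/sys/net/ipv4/ip_forward above turns on IPv4 forwarding, which kube-proxy and the bridge CNI rely on; the sysctl call just before it reads the bridge-nf-call-iptables setting. The write, as a one-liner (root required):

    // ipforward.go: equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    package main

    import "os"

    func main() {
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            panic(err) // needs root
        }
    }
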
	I0216 16:52:41.885949   51509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:52:41.964759   51509 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 16:52:42.090173   51509 start.go:475] detecting cgroup driver to use...
	I0216 16:52:42.090230   51509 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 16:52:42.090315   51509 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 16:52:42.106267   51509 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 16:52:42.106342   51509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 16:52:42.120913   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 16:52:42.140781   51509 ssh_runner.go:195] Run: which cri-dockerd
	I0216 16:52:42.145149   51509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 16:52:42.155645   51509 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 16:52:42.179036   51509 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 16:52:42.289104   51509 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 16:52:42.402593   51509 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 16:52:42.402711   51509 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
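
docker.go:574 pushes a small daemon.json to pin Docker to the cgroupfs driver, matching the containerd edits above. The log shows only the size (130 bytes), not the contents; a plausible minimal payload uses exec-opts, sketched here as an assumption rather than the verbatim file:

    // daemonjson.go: hypothetical minimal /etc/docker/daemon.json for cgroupfs.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            // Standard Docker option for pinning the cgroup driver; the real
            // file minikube writes may carry additional fields.
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }
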
	I0216 16:52:42.425877   51509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:52:42.505293   51509 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 16:52:42.757699   51509 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 16:52:42.780197   51509 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 16:52:42.805607   51509 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0216 16:52:42.805704   51509 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-416645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 16:52:42.820427   51509 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0216 16:52:42.823953   51509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
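
The bash pipeline above rewrites /etc/hosts through a temp file: drop any stale host.minikube.internal line, append the fresh mapping, then cp the result back. Roughly, in Go (cp rather than rename, because Docker bind-mounts /etc/hosts into the container, so the file must be overwritten in place):

    // hosts.go: sketch of the /etc/hosts rewrite above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line) // grep -v equivalent
            }
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        // cp, not rename: keep the bind-mounted /etc/hosts on its original inode.
        if out, err := exec.Command("sudo", "cp", tmp, "/etc/hosts").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }
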
	I0216 16:52:42.834253   51509 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0216 16:52:42.834321   51509 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:52:42.851526   51509 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 16:52:42.851563   51509 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
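
The miss reported above is a naming mismatch rather than a missing download: the preload tarball tags everything under k8s.gcr.io (see the listing), while the cache layer asks for registry.k8s.io refs. The test itself is a membership check over the runtime's tag list, roughly:

    // preloadcheck.go: sketch of the "wasn't preloaded" test.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, ref := range strings.Fields(string(out)) {
            have[ref] = true
        }
        // The k8s.gcr.io tags from the preload never match these refs.
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.18.20",
            "registry.k8s.io/pause:3.2",
        } {
            if !have[want] {
                fmt.Printf("%s wasn't preloaded\n", want)
            }
        }
    }
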
	I0216 16:52:42.851617   51509 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 16:52:42.860393   51509 ssh_runner.go:195] Run: which lz4
	I0216 16:52:42.863608   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0216 16:52:42.863705   51509 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 16:52:42.867016   51509 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 16:52:42.867049   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0216 16:52:44.926066   51509 docker.go:649] Took 2.062398 seconds to copy over tarball
	I0216 16:52:44.926192   51509 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 16:52:47.348171   51509 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.421911106s)
	I0216 16:52:47.348247   51509 ssh_runner.go:146] rm: /preloaded.tar.lz4
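
Copying the ~460 MB preload and unpacking it dominate this phase, about two seconds each per the duration metrics. The extraction step above, re-run locally, would look like:

    // preload.go: sketch of the timed lz4 tarball extraction into /var.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(string(out))
        }
        fmt.Printf("extracted in %s\n", time.Since(start))
    }
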
	I0216 16:52:47.426477   51509 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 16:52:47.435270   51509 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0216 16:52:47.452837   51509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 16:52:47.542307   51509 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 16:52:49.089891   51509 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.547551047s)
	I0216 16:52:49.089971   51509 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 16:52:49.106960   51509 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0216 16:52:49.106980   51509 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0216 16:52:49.106989   51509 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 16:52:49.109593   51509 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:52:49.109593   51509 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:52:49.109834   51509 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0216 16:52:49.109995   51509 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:52:49.110069   51509 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:52:49.110135   51509 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0216 16:52:49.110323   51509 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:52:49.110327   51509 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:52:49.111432   51509 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0216 16:52:49.111923   51509 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:52:49.112612   51509 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0216 16:52:49.112761   51509 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:52:49.112829   51509 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:52:49.112992   51509 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:52:49.113237   51509 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:52:49.113411   51509 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W0216 16:52:49.470148   51509 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	W0216 16:52:49.470338   51509 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.470534   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:52:49.470908   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0216 16:52:49.475013   51509 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.475297   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:52:49.494600   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0216 16:52:49.497142   51509 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.497363   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0216 16:52:49.505476   51509 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.505652   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0216 16:52:49.508429   51509 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0216 16:52:49.508501   51509 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:52:49.508583   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0216 16:52:49.508721   51509 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0216 16:52:49.508762   51509 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:52:49.508808   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0216 16:52:49.526338   51509 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0216 16:52:49.526418   51509 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0216 16:52:49.526492   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W0216 16:52:49.531051   51509 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.531298   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0216 16:52:49.551680   51509 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0216 16:52:49.551764   51509 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:52:49.551846   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0216 16:52:49.552110   51509 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0216 16:52:49.552155   51509 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0216 16:52:49.552224   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0216 16:52:49.580351   51509 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0216 16:52:49.580437   51509 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0216 16:52:49.580512   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0216 16:52:49.605371   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0216 16:52:49.605474   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0216 16:52:49.605540   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0216 16:52:49.605626   51509 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0216 16:52:49.605680   51509 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0216 16:52:49.605739   51509 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	W0216 16:52:49.626365   51509 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0216 16:52:49.626584   51509 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:52:49.639286   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0216 16:52:49.639446   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0216 16:52:49.645154   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0216 16:52:49.661459   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0216 16:52:49.661559   51509 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0216 16:52:49.661617   51509 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:52:49.661675   51509 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 16:52:49.690122   51509 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0216 16:52:49.690235   51509 cache_images.go:92] LoadImages completed in 583.234061ms
	W0216 16:52:49.690320   51509 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
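
Each "needs transfer" verdict above comes from comparing the image ID the runtime reports against the hash the cache expects; the preloaded amd64 images fail that check on this arm64 host, so every image is removed and queued for a load from the local cache, which does not exist yet, hence the warning. A sketch of the check for one image (expected hash copied from the log):

    // needstransfer.go: compare the runtime's image ID with the expected hash.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", "registry.k8s.io/kube-apiserver:v1.18.20").Output()
        // A missing image or a mismatched ID both mean the cached copy must be loaded.
        if err != nil || !strings.Contains(string(out), want) {
            fmt.Println(`"registry.k8s.io/kube-apiserver:v1.18.20" needs transfer`)
        }
    }
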
	I0216 16:52:49.690405   51509 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 16:52:49.741523   51509 cni.go:84] Creating CNI manager for ""
	I0216 16:52:49.741550   51509 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:52:49.742055   51509 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 16:52:49.742082   51509 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-416645 NodeName:ingress-addon-legacy-416645 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 16:52:49.742246   51509 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-416645"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0216 16:52:49.742322   51509 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-416645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-416645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 16:52:49.742394   51509 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0216 16:52:49.751358   51509 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 16:52:49.751461   51509 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 16:52:49.759921   51509 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0216 16:52:49.777846   51509 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0216 16:52:49.795677   51509 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0216 16:52:49.813548   51509 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0216 16:52:49.817236   51509 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 16:52:49.827641   51509 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645 for IP: 192.168.49.2
	I0216 16:52:49.827675   51509 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:49.827863   51509 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 16:52:49.827928   51509 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 16:52:49.827992   51509 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.key
	I0216 16:52:49.828008   51509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.crt with IP's: []
	I0216 16:52:50.455091   51509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.crt ...
	I0216 16:52:50.455123   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.crt: {Name:mk73c932bd8bebc030905b902d35a1645232131d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.455312   51509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.key ...
	I0216 16:52:50.455327   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/client.key: {Name:mk15b76b03baba739b3b007d5fd66278f6ecfbcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.455411   51509 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key.dd3b5fb2
	I0216 16:52:50.455429   51509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 16:52:50.647996   51509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt.dd3b5fb2 ...
	I0216 16:52:50.648027   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt.dd3b5fb2: {Name:mk5e31dfdd1bf5c3e143ddff0150c31405c0b303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.648199   51509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key.dd3b5fb2 ...
	I0216 16:52:50.648213   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key.dd3b5fb2: {Name:mk309730895d6f1ebfbf91130cf4b785b50cc3fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.648295   51509 certs.go:337] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt
	I0216 16:52:50.648377   51509 certs.go:341] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key
	I0216 16:52:50.648441   51509 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.key
	I0216 16:52:50.648457   51509 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.crt with IP's: []
	I0216 16:52:50.831197   51509 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.crt ...
	I0216 16:52:50.831227   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.crt: {Name:mk6845de02e8a4598d17e839bc0f3e4a08bceec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.831418   51509 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.key ...
	I0216 16:52:50.831434   51509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.key: {Name:mk289edd3e5ce130ccb43752825e99a171921181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 16:52:50.831515   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0216 16:52:50.831540   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0216 16:52:50.831554   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0216 16:52:50.831583   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0216 16:52:50.831600   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0216 16:52:50.831613   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0216 16:52:50.831631   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0216 16:52:50.831643   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0216 16:52:50.831711   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 16:52:50.831755   51509 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 16:52:50.831770   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 16:52:50.831799   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 16:52:50.831834   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 16:52:50.831865   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 16:52:50.831920   51509 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 16:52:50.831950   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:52:50.831964   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem -> /usr/share/ca-certificates/7513.pem
	I0216 16:52:50.831975   51509 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> /usr/share/ca-certificates/75132.pem
	I0216 16:52:50.832622   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 16:52:50.856230   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 16:52:50.878665   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 16:52:50.900533   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/ingress-addon-legacy-416645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0216 16:52:50.923550   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 16:52:50.947242   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 16:52:50.970908   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 16:52:50.994411   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 16:52:51.019790   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 16:52:51.045764   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 16:52:51.070877   51509 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 16:52:51.096253   51509 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 16:52:51.115501   51509 ssh_runner.go:195] Run: openssl version
	I0216 16:52:51.121211   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 16:52:51.130642   51509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:52:51.134198   51509 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:52:51.134268   51509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 16:52:51.141354   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 16:52:51.150878   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 16:52:51.160266   51509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 16:52:51.163780   51509 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 16:52:51.163842   51509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 16:52:51.170713   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
	I0216 16:52:51.180121   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 16:52:51.189138   51509 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 16:52:51.192459   51509 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 16:52:51.192520   51509 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 16:52:51.199461   51509 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
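
Each CA file is linked into /etc/ssl/certs twice: once by name, once under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem above), which is what OpenSSL's hashed-directory lookup resolves. The hash-and-link step, sketched:

    // cahash.go: compute the OpenSSL subject hash and create the <hash>.0 link.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/etc/ssl/certs/minikubeCA.pem" // itself linked from /usr/share/ca-certificates
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
    }
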
	I0216 16:52:51.208307   51509 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 16:52:51.211537   51509 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 16:52:51.211593   51509 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-416645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-416645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:52:51.211711   51509 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 16:52:51.230044   51509 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 16:52:51.238663   51509 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 16:52:51.247397   51509 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 16:52:51.247468   51509 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 16:52:51.256292   51509 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 16:52:51.256336   51509 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 16:52:51.307579   51509 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 16:52:51.308151   51509 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 16:52:51.492622   51509 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 16:52:51.492793   51509 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 16:52:51.492890   51509 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 16:52:51.492957   51509 kubeadm.go:322] OS: Linux
	I0216 16:52:51.493041   51509 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 16:52:51.493122   51509 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 16:52:51.493206   51509 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 16:52:51.493287   51509 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 16:52:51.493370   51509 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 16:52:51.493453   51509 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 16:52:51.573366   51509 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 16:52:51.573494   51509 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 16:52:51.573609   51509 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 16:52:51.751001   51509 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 16:52:51.751186   51509 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 16:52:51.751261   51509 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 16:52:51.849049   51509 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 16:52:51.852193   51509 out.go:204]   - Generating certificates and keys ...
	I0216 16:52:51.852281   51509 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 16:52:51.857026   51509 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 16:52:52.627787   51509 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 16:52:52.870639   51509 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 16:52:53.375328   51509 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 16:52:53.751539   51509 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 16:52:54.041386   51509 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 16:52:54.041808   51509 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-416645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:52:54.282764   51509 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 16:52:54.283072   51509 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-416645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0216 16:52:54.629285   51509 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 16:52:55.422934   51509 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 16:52:56.327067   51509 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 16:52:56.327365   51509 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 16:52:56.779443   51509 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 16:52:56.971190   51509 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 16:52:57.477608   51509 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 16:52:58.119066   51509 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 16:52:58.119962   51509 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 16:52:58.122356   51509 out.go:204]   - Booting up control plane ...
	I0216 16:52:58.122458   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 16:52:58.129405   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 16:52:58.131776   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 16:52:58.133531   51509 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 16:52:58.142973   51509 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 16:53:38.144388   51509 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 16:56:58.145427   51509 kubeadm.go:322] 
	I0216 16:56:58.145503   51509 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 16:56:58.145546   51509 kubeadm.go:322] 		timed out waiting for the condition
	I0216 16:56:58.145552   51509 kubeadm.go:322] 
	I0216 16:56:58.145585   51509 kubeadm.go:322] 	This error is likely caused by:
	I0216 16:56:58.145624   51509 kubeadm.go:322] 		- The kubelet is not running
	I0216 16:56:58.145731   51509 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 16:56:58.145740   51509 kubeadm.go:322] 
	I0216 16:56:58.145844   51509 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 16:56:58.145878   51509 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 16:56:58.145915   51509 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 16:56:58.145940   51509 kubeadm.go:322] 
	I0216 16:56:58.146071   51509 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 16:56:58.146168   51509 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0216 16:56:58.146178   51509 kubeadm.go:322] 
	I0216 16:56:58.146263   51509 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0216 16:56:58.146322   51509 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 16:56:58.146426   51509 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 16:56:58.146470   51509 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 16:56:58.146485   51509 kubeadm.go:322] 
	I0216 16:56:58.149544   51509 kubeadm.go:322] W0216 16:52:51.306805    1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 16:56:58.149732   51509 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 16:56:58.149859   51509 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 16:56:58.150058   51509 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 16:56:58.150158   51509 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 16:56:58.150278   51509 kubeadm.go:322] W0216 16:52:58.129655    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:56:58.150400   51509 kubeadm.go:322] W0216 16:52:58.131994    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:56:58.150480   51509 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 16:56:58.150546   51509 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 16:56:58.150673   51509 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-416645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-416645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:52:51.306805    1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:52:58.129655    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:52:58.131994    1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
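The failed attempt above ends with kubeadm's own troubleshooting advice; minikube resets and retries below. A minimal sketch of running those diagnostics by hand against this node (assuming, as the docker driver does in this run, that the node container carries the profile name ingress-addon-legacy-416645):

	# open a shell inside the minikube node container (container name = profile name under the docker driver)
	docker exec -it ingress-addon-legacy-416645 /bin/bash
	# is the kubelet running, and why did it stop?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# list Kubernetes containers, excluding pause sandboxes, then read the failing one's logs
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID   # CONTAINERID taken from the listing above
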
	I0216 16:56:58.150724   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 16:56:58.941750   51509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 16:56:58.953524   51509 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 16:56:58.953589   51509 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 16:56:58.962393   51509 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 16:56:58.962438   51509 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 16:56:59.011264   51509 kubeadm.go:322] W0216 16:56:59.010812    5084 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0216 16:56:59.134644   51509 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 16:56:59.188856   51509 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0216 16:56:59.189310   51509 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 16:56:59.273528   51509 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 16:57:01.543952   51509 kubeadm.go:322] W0216 16:57:01.531958    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 16:57:01.544100   51509 kubeadm.go:322] W0216 16:57:01.533540    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0216 17:01:01.548348   51509 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:01:01.548436   51509 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:01:01.551366   51509 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0216 17:01:01.551424   51509 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:01:01.551512   51509 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:01:01.551583   51509 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:01:01.551633   51509 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:01:01.551669   51509 kubeadm.go:322] OS: Linux
	I0216 17:01:01.551714   51509 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:01:01.551762   51509 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:01:01.551816   51509 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:01:01.551865   51509 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:01:01.551917   51509 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:01:01.551973   51509 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:01:01.552045   51509 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:01:01.552136   51509 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:01:01.552233   51509 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:01:01.552332   51509 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:01:01.552417   51509 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:01:01.552458   51509 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 17:01:01.552523   51509 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:01:01.554821   51509 out.go:204]   - Generating certificates and keys ...
	I0216 17:01:01.554914   51509 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:01:01.554984   51509 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:01:01.555061   51509 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:01:01.555123   51509 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:01:01.555195   51509 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:01:01.555251   51509 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:01:01.555311   51509 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:01:01.555378   51509 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:01:01.555476   51509 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:01:01.555574   51509 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:01:01.555624   51509 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:01:01.555693   51509 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:01:01.555751   51509 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:01:01.555804   51509 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:01:01.555868   51509 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:01:01.555926   51509 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:01:01.555997   51509 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:01:01.558242   51509 out.go:204]   - Booting up control plane ...
	I0216 17:01:01.558350   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:01:01.558436   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:01:01.558508   51509 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:01:01.558593   51509 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:01:01.558745   51509 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:01:01.558793   51509 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:01:01.558804   51509 kubeadm.go:322] 
	I0216 17:01:01.558869   51509 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0216 17:01:01.558922   51509 kubeadm.go:322] 		timed out waiting for the condition
	I0216 17:01:01.558934   51509 kubeadm.go:322] 
	I0216 17:01:01.558968   51509 kubeadm.go:322] 	This error is likely caused by:
	I0216 17:01:01.559003   51509 kubeadm.go:322] 		- The kubelet is not running
	I0216 17:01:01.559105   51509 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:01:01.559113   51509 kubeadm.go:322] 
	I0216 17:01:01.559235   51509 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:01:01.559291   51509 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0216 17:01:01.559326   51509 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0216 17:01:01.559346   51509 kubeadm.go:322] 
	I0216 17:01:01.559458   51509 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:01:01.559540   51509 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0216 17:01:01.559548   51509 kubeadm.go:322] 
	I0216 17:01:01.559624   51509 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:01:01.559677   51509 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:01:01.559752   51509 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0216 17:01:01.559787   51509 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0216 17:01:01.559856   51509 kubeadm.go:322] 
	I0216 17:01:01.559857   51509 kubeadm.go:406] StartCluster complete in 8m10.348273918s
	I0216 17:01:01.559977   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:01:01.577383   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.577404   51509 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:01:01.577462   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:01:01.593671   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.593698   51509 logs.go:278] No container was found matching "etcd"
	I0216 17:01:01.593757   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:01:01.610072   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.610094   51509 logs.go:278] No container was found matching "coredns"
	I0216 17:01:01.610156   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:01:01.626059   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.626084   51509 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:01:01.626181   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:01:01.644287   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.644312   51509 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:01:01.644370   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:01:01.661386   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.661410   51509 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:01:01.661472   51509 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:01:01.678524   51509 logs.go:276] 0 containers: []
	W0216 17:01:01.678545   51509 logs.go:278] No container was found matching "kindnet"
	I0216 17:01:01.678556   51509 logs.go:123] Gathering logs for kubelet ...
	I0216 17:01:01.678568   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:01:01.704035   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:33 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:33.203720    5227 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-416645_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0216 17:01:01.707324   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:35 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:35.203811    5227 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-416645_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0216 17:01:01.711889   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:39 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:39.203089    5227 pod_workers.go:191] Error syncing pod 0dd36e3a1106181565b3bdde468e3d7f ("etcd-ingress-addon-legacy-416645_kube-system(0dd36e3a1106181565b3bdde468e3d7f)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0216 17:01:01.715446   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:41 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:41.203575    5227 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-416645_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0216 17:01:01.724321   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:48 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:48.203479    5227 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-416645_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0216 17:01:01.726308   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:49 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:49.204925    5227 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-416645_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0216 17:01:01.732232   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:54 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:54.204133    5227 pod_workers.go:191] Error syncing pod 0dd36e3a1106181565b3bdde468e3d7f ("etcd-ingress-addon-legacy-416645_kube-system(0dd36e3a1106181565b3bdde468e3d7f)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0216 17:01:01.734585   51509 logs.go:138] Found kubelet problem: Feb 16 17:00:55 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:55.205521    5227 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-416645_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
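Every kubelet problem found above is the same ImageInspectError: Docker reports "Id or size of image ... is not set" for each control-plane image, so none of the static pods can start. A quick sketch for checking whether those images are actually present and intact on the node (image tags copied from the errors above; the docker exec form is an assumption based on the docker driver used here):

	docker exec ingress-addon-legacy-416645 docker images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd'
	docker exec ingress-addon-legacy-416645 docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.18.20
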
	I0216 17:01:01.741297   51509 logs.go:123] Gathering logs for dmesg ...
	I0216 17:01:01.741322   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:01:01.756160   51509 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:01:01.756189   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:01:01.827965   51509 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:01:01.827994   51509 logs.go:123] Gathering logs for Docker ...
	I0216 17:01:01.828009   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:01:01.848587   51509 logs.go:123] Gathering logs for container status ...
	I0216 17:01:01.848619   51509 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0216 17:01:01.892382   51509 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:56:59.010812    5084 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:57:01.531958    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:57:01.533540    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:01:01.892503   51509 out.go:239] * 
	W0216 17:01:01.892597   51509 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:56:59.010812    5084 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:57:01.531958    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:57:01.533540    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:01:01.892799   51509 out.go:239] * 
	W0216 17:01:01.893788   51509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
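To produce the logs.txt the box above asks for, a sketch using the binary built for this run and this profile:

	out/minikube-linux-arm64 -p ingress-addon-legacy-416645 logs --file=logs.txt
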
	I0216 17:01:01.896737   51509 out.go:177] X Problems detected in kubelet:
	I0216 17:01:01.899220   51509 out.go:177]   Feb 16 17:00:33 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:33.203720    5227 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-416645_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	I0216 17:01:01.901509   51509 out.go:177]   Feb 16 17:00:35 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:35.203811    5227 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-416645_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	I0216 17:01:01.903866   51509 out.go:177]   Feb 16 17:00:39 ingress-addon-legacy-416645 kubelet[5227]: E0216 17:00:39.203089    5227 pod_workers.go:191] Error syncing pod 0dd36e3a1106181565b3bdde468e3d7f ("etcd-ingress-addon-legacy-416645_kube-system(0dd36e3a1106181565b3bdde468e3d7f)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	I0216 17:01:01.907795   51509 out.go:177] 
	W0216 17:01:01.909994   51509 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0216 16:56:59.010812    5084 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0216 16:57:01.531958    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0216 16:57:01.533540    5084 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:01:01.910068   51509 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:01:01.910093   51509 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:01:01.912352   51509 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-arm64 start -p ingress-addon-legacy-416645 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (531.23s)
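A minimal triage sketch for this K8S_KUBELET_NOT_RUNNING failure, assuming the ingress-addon-legacy-416645 profile from this run is still up; the first two commands mirror the advice kubeadm prints above, and the image tag is taken from the kubelet errors in this log:

	# Inspect recent kubelet activity on the minikube node
	minikube -p ingress-addon-legacy-416645 ssh -- journalctl -xeu kubelet | tail -n 50
	# List control-plane containers, as kubeadm suggests
	minikube -p ingress-addon-legacy-416645 ssh -- "docker ps -a | grep kube | grep -v pause"
	# Check the image behind the ImageInspectError reported by the kubelet
	minikube -p ingress-addon-legacy-416645 ssh -- docker image inspect k8s.gcr.io/kube-apiserver:v1.18.20

If docker image inspect also fails inside the node, the ImageInspectError points at docker-side image metadata rather than at kubelet configuration.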

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (69.1s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-416645 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-416645 addons enable ingress --alsologtostderr -v=5: signal: killed (1m8.775082239s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 17:01:02.050721   60481 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:01:02.050899   60481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:01:02.050908   60481 out.go:304] Setting ErrFile to fd 2...
	I0216 17:01:02.050914   60481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:01:02.051158   60481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:01:02.051456   60481 mustload.go:65] Loading cluster: ingress-addon-legacy-416645
	I0216 17:01:02.051830   60481 config.go:182] Loaded profile config "ingress-addon-legacy-416645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:01:02.051851   60481 addons.go:597] checking whether the cluster is paused
	I0216 17:01:02.051951   60481 config.go:182] Loaded profile config "ingress-addon-legacy-416645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:01:02.051970   60481 host.go:66] Checking if "ingress-addon-legacy-416645" exists ...
	I0216 17:01:02.052495   60481 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Status}}
	I0216 17:01:02.072531   60481 ssh_runner.go:195] Run: systemctl --version
	I0216 17:01:02.072589   60481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 17:01:02.092802   60481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 17:01:02.189029   60481 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:01:02.207648   60481 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0216 17:01:02.210151   60481 config.go:182] Loaded profile config "ingress-addon-legacy-416645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0216 17:01:02.210170   60481 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-416645"
	I0216 17:01:02.210179   60481 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-416645"
	I0216 17:01:02.210212   60481 host.go:66] Checking if "ingress-addon-legacy-416645" exists ...
	I0216 17:01:02.210651   60481 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-416645 --format={{.State.Status}}
	I0216 17:01:02.227942   60481 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0216 17:01:02.229829   60481 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 17:01:02.231682   60481 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0216 17:01:02.233793   60481 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0216 17:01:02.233816   60481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0216 17:01:02.233883   60481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-416645
	I0216 17:01:02.249381   60481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/ingress-addon-legacy-416645/id_rsa Username:docker}
	I0216 17:01:02.358419   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:02.420765   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:02.420797   60481 retry.go:31] will retry after 335.771855ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:02.757458   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:02.830970   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:02.831000   60481 retry.go:31] will retry after 497.120201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:03.328715   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:03.390456   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:03.390486   60481 retry.go:31] will retry after 512.089335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:03.903222   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:03.967337   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:03.967366   60481 retry.go:31] will retry after 1.263223102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:05.231797   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:05.297870   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:05.297908   60481 retry.go:31] will retry after 867.475791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:06.165970   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:06.237751   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:06.237782   60481 retry.go:31] will retry after 1.591421184s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:07.829845   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:07.893208   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:07.893239   60481 retry.go:31] will retry after 2.187581421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:10.081106   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:10.147648   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:10.147702   60481 retry.go:31] will retry after 5.169520314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:15.319943   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:15.382452   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:15.382482   60481 retry.go:31] will retry after 9.264052737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:24.648978   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:24.711263   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:24.711292   60481 retry.go:31] will retry after 7.48499229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:32.197235   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:32.261461   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:32.261491   60481 retry.go:31] will retry after 11.09252394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:43.354273   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:43.418127   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:43.418156   60481 retry.go:31] will retry after 15.35705136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:58.775481   60481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0216 17:01:58.837636   60481 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0216 17:01:58.837667   60481 retry.go:31] will retry after 22.955800009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: signal: killed
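Every retry above fails with a refused connection to localhost:8443, i.e. the apiserver from the failed StartLegacyK8sCluster run never came up, so the addon apply could not succeed before the test was killed. A quick check, assuming curl is available in the node image:

	# Expect connection refused while the control plane is down
	minikube -p ingress-addon-legacy-416645 ssh -- curl -sk https://localhost:8443/healthz
	# Confirm whether an apiserver container exists at all
	minikube -p ingress-addon-legacy-416645 ssh -- "docker ps -a | grep kube-apiserver"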
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-416645
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-416645:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2",
	        "Created": "2024-02-16T16:52:37.57655514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T16:52:37.865136736Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2/hosts",
	        "LogPath": "/var/lib/docker/containers/3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2/3ea9f62594711664bd83cdbb4672a2cf0734900072169fd319e2fc77150a76d2-json.log",
	        "Name": "/ingress-addon-legacy-416645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-416645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-416645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/67f17265fa2958898de7c3e2a9dc23f060abce20f5266c55ae363d4618cded99-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67f17265fa2958898de7c3e2a9dc23f060abce20f5266c55ae363d4618cded99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67f17265fa2958898de7c3e2a9dc23f060abce20f5266c55ae363d4618cded99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67f17265fa2958898de7c3e2a9dc23f060abce20f5266c55ae363d4618cded99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-416645",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-416645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-416645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-416645",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-416645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce3a0d5a647cfd057796a83e6dc4a712fc4987b0b78b34413fd3c60b4a718328",
	            "SandboxKey": "/var/run/docker/netns/ce3a0d5a647c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-416645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ea9f6259471",
	                        "ingress-addon-legacy-416645"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "fa2abe6d5a0451fb90daf671786e3252d8c50472cae99546574876eb97e1266e",
	                    "EndpointID": "4cf1dc979380756c0a9db51b3313979b63f0d7c880fa2169f4a240f93e8e2b9c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-416645",
	                        "3ea9f6259471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
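The inspect output above shows every service port published on 127.0.0.1 with an ephemeral host port. To pull a single mapped port out of this JSON, the same Go-template pattern the harness uses for 22/tcp works for any port, e.g.:

	# Prints 32789 for this run (see the Ports map above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-416645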
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-416645 -n ingress-addon-legacy-416645
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-416645 -n ingress-addon-legacy-416645: exit status 6 (307.640239ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:02:11.072983   61381 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-416645" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-416645" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (69.10s)
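The exit status 6 on the status probe comes from the kubeconfig, not the container state: the profile is missing from the kubeconfig (see the extract-IP error above), and minikube's own warning names the fix. A sketch using this run's profile:

	# Rewrite the kubeconfig entry for this profile, then verify
	minikube -p ingress-addon-legacy-416645 update-context
	kubectl config current-context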

                                                
                                    
TestKubernetesUpgrade (598.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0216 17:27:23.559960    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:27:44.040418    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:28:25.000819    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:28:26.382017    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109 (8m40.033473746s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-283660] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node kubernetes-upgrade-283660 in cluster kubernetes-upgrade-283660
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.022350    5371 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-283660_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.025677    5371 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-283660_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:35:38 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:38.996891    5371 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-283660_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 17:27:18.330091  201972 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:27:18.330244  201972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:27:18.330269  201972 out.go:304] Setting ErrFile to fd 2...
	I0216 17:27:18.330289  201972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:27:18.331407  201972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:27:18.331914  201972 out.go:298] Setting JSON to false
	I0216 17:27:18.333040  201972 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4188,"bootTime":1708100250,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 17:27:18.333147  201972 start.go:139] virtualization:  
	I0216 17:27:18.336244  201972 out.go:177] * [kubernetes-upgrade-283660] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 17:27:18.339392  201972 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:27:18.341426  201972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:27:18.339516  201972 notify.go:220] Checking for updates...
	I0216 17:27:18.343565  201972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:27:18.345781  201972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 17:27:18.347748  201972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 17:27:18.350012  201972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:27:18.352879  201972 config.go:182] Loaded profile config "cert-expiration-192643": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:27:18.353017  201972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:27:18.374563  201972 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:27:18.374677  201972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:27:18.443985  201972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:27:18.434288595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:27:18.444102  201972 docker.go:295] overlay module found
	I0216 17:27:18.446458  201972 out.go:177] * Using the docker driver based on user configuration
	I0216 17:27:18.448301  201972 start.go:299] selected driver: docker
	I0216 17:27:18.448316  201972 start.go:903] validating driver "docker" against <nil>
	I0216 17:27:18.448329  201972 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:27:18.449043  201972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:27:18.521818  201972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:27:18.512152145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:27:18.522007  201972 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 17:27:18.522231  201972 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 17:27:18.524674  201972 out.go:177] * Using Docker driver with root privileges
	I0216 17:27:18.526940  201972 cni.go:84] Creating CNI manager for ""
	I0216 17:27:18.526971  201972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:27:18.526982  201972 start_flags.go:323] config:
	{Name:kubernetes-upgrade-283660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-283660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:27:18.529672  201972 out.go:177] * Starting control plane node kubernetes-upgrade-283660 in cluster kubernetes-upgrade-283660
	I0216 17:27:18.531646  201972 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:27:18.533824  201972 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:27:18.536163  201972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:27:18.536210  201972 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0216 17:27:18.536229  201972 cache.go:56] Caching tarball of preloaded images
	I0216 17:27:18.536266  201972 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:27:18.536309  201972 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 17:27:18.536319  201972 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:27:18.536442  201972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/config.json ...
	I0216 17:27:18.536459  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/config.json: {Name:mk117994d86517250bfbc1674cb9738b555781b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:18.552846  201972 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:27:18.552872  201972 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:27:18.552898  201972 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:27:18.552932  201972 start.go:365] acquiring machines lock for kubernetes-upgrade-283660: {Name:mk64c8741e45a20bf32567249443c2ff1bb1399b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:27:18.553043  201972 start.go:369] acquired machines lock for "kubernetes-upgrade-283660" in 89.494µs
	I0216 17:27:18.553073  201972 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-283660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-283660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:27:18.553155  201972 start.go:125] createHost starting for "" (driver="docker")
	I0216 17:27:18.555722  201972 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 17:27:18.555993  201972 start.go:159] libmachine.API.Create for "kubernetes-upgrade-283660" (driver="docker")
	I0216 17:27:18.556032  201972 client.go:168] LocalClient.Create starting
	I0216 17:27:18.556093  201972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem
	I0216 17:27:18.556126  201972 main.go:141] libmachine: Decoding PEM data...
	I0216 17:27:18.556143  201972 main.go:141] libmachine: Parsing certificate...
	I0216 17:27:18.556195  201972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem
	I0216 17:27:18.556216  201972 main.go:141] libmachine: Decoding PEM data...
	I0216 17:27:18.556235  201972 main.go:141] libmachine: Parsing certificate...
	I0216 17:27:18.556610  201972 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-283660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 17:27:18.571366  201972 cli_runner.go:211] docker network inspect kubernetes-upgrade-283660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 17:27:18.571443  201972 network_create.go:281] running [docker network inspect kubernetes-upgrade-283660] to gather additional debugging logs...
	I0216 17:27:18.571462  201972 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-283660
	W0216 17:27:18.586328  201972 cli_runner.go:211] docker network inspect kubernetes-upgrade-283660 returned with exit code 1
	I0216 17:27:18.586358  201972 network_create.go:284] error running [docker network inspect kubernetes-upgrade-283660]: docker network inspect kubernetes-upgrade-283660: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-283660 not found
	I0216 17:27:18.586371  201972 network_create.go:286] output of [docker network inspect kubernetes-upgrade-283660]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-283660 not found
	
	** /stderr **
	I0216 17:27:18.586495  201972 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:27:18.602506  201972 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bf2219ceb1d4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fc:0a:69:d6} reservation:<nil>}
	I0216 17:27:18.602847  201972 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88cc490de1c4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6d:6b:26:04} reservation:<nil>}
	I0216 17:27:18.603189  201972 network.go:212] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5c899076fef1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a8:be:71:53} reservation:<nil>}
	I0216 17:27:18.603607  201972 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025c5270}
	I0216 17:27:18.603627  201972 network_create.go:124] attempt to create docker network kubernetes-upgrade-283660 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0216 17:27:18.603687  201972 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-283660 kubernetes-upgrade-283660
	I0216 17:27:18.665871  201972 network_create.go:108] docker network kubernetes-upgrade-283660 192.168.76.0/24 created
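
The network.go lines above capture the subnet-selection pattern: probe candidate private /24s in order, skip any already bound to a bridge interface, and create the docker network on the first free one. A minimal Go sketch of that loop, assuming a simplified isSubnetTaken helper backed only by `docker network inspect` (minikube's real check also scans host interfaces):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // isSubnetTaken reports whether an existing docker network already uses
    // cidr. Simplified stand-in for minikube's probing, which also walks the
    // host's interfaces (see the IfaceName fields in the log above).
    func isSubnetTaken(cidr string) bool {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            return false
        }
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect", "--format",
                "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
            if err == nil && strings.Contains(string(out), cidr) {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidates in the order the log probes them: .49, .58, .67, .76.
        for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24",
            "192.168.67.0/24", "192.168.76.0/24"} {
            if isSubnetTaken(cidr) {
                fmt.Println("skipping subnet", cidr, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", cidr)
            break
        }
    }
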
	I0216 17:27:18.665916  201972 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-283660" container
	I0216 17:27:18.665989  201972 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 17:27:18.681016  201972 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-283660 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-283660 --label created_by.minikube.sigs.k8s.io=true
	I0216 17:27:18.705997  201972 oci.go:103] Successfully created a docker volume kubernetes-upgrade-283660
	I0216 17:27:18.706087  201972 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-283660-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-283660 --entrypoint /usr/bin/test -v kubernetes-upgrade-283660:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 17:27:19.262344  201972 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-283660
	I0216 17:27:19.262388  201972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:27:19.262410  201972 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 17:27:19.262509  201972 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-283660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 17:27:25.447107  201972 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-283660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (6.184547248s)
	I0216 17:27:25.447140  201972 kic.go:203] duration metric: took 6.184727 seconds to extract preloaded images to volume
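
The extraction above is the kic preload shortcut: before the node container exists, a throwaway container mounts the lz4 preload read-only next to the node's named volume and untars the images straight into it, so dockerd inside the node boots with a warm image store. A sketch that rebuilds the exact argv from the log (preloadExtractArgs is a hypothetical helper):

    package main

    import "fmt"

    // preloadExtractArgs rebuilds the `docker run` from the log: a --rm
    // container whose entrypoint is tar, with the preload mounted at
    // /preloaded.tar:ro and the node's volume at /extractDir.
    func preloadExtractArgs(volume, tarball, baseImage string) []string {
        return []string{
            "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball + ":/preloaded.tar:ro",
            "-v", volume + ":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        }
    }

    func main() {
        fmt.Println(preloadExtractArgs(
            "kubernetes-upgrade-283660",
            "preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936"))
    }
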
	W0216 17:27:25.447293  201972 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 17:27:25.447399  201972 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 17:27:25.503215  201972 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-283660 --name kubernetes-upgrade-283660 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-283660 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-283660 --network kubernetes-upgrade-283660 --ip 192.168.76.2 --volume kubernetes-upgrade-283660:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 17:27:25.840336  201972 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Running}}
	I0216 17:27:25.864327  201972 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:27:25.884996  201972 cli_runner.go:164] Run: docker exec kubernetes-upgrade-283660 stat /var/lib/dpkg/alternatives/iptables
	I0216 17:27:25.963104  201972 oci.go:144] the created container "kubernetes-upgrade-283660" has a running status.
	I0216 17:27:25.963134  201972 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa...
	I0216 17:27:26.400765  201972 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 17:27:26.435448  201972 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:27:26.465143  201972 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 17:27:26.465165  201972 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-283660 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0216 17:27:26.551650  201972 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:27:26.575581  201972 machine.go:88] provisioning docker machine ...
	I0216 17:27:26.575610  201972 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-283660"
	I0216 17:27:26.575672  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:26.593025  201972 main.go:141] libmachine: Using SSH client type: native
	I0216 17:27:26.593481  201972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0216 17:27:26.593494  201972 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-283660 && echo "kubernetes-upgrade-283660" | sudo tee /etc/hostname
	I0216 17:27:26.594096  201972 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 17:27:29.748912  201972 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-283660
	
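
The handshake-failed EOF at 17:27:26, followed by a clean result three seconds later, is the expected retry pattern right after container creation: sshd inside the node is still coming up, so the client keeps dialing until it answers. A minimal sketch of that retry, with a plain TCP dial standing in for the SSH handshake and the forwarded port taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps trying the node's forwarded SSH port until it
    // accepts a connection or the deadline passes; stands in for the
    // handshake retries implied by the EOF-then-success pair in the log.
    func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
        var lastErr error
        for end := time.Now().Add(deadline); time.Now().Before(end); {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(500 * time.Millisecond)
        }
        return nil, fmt.Errorf("gave up dialing %s: %w", addr, lastErr)
    }

    func main() {
        conn, err := dialWithRetry("127.0.0.1:32977", 30*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        conn.Close()
        fmt.Println("ssh port is up")
    }
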
	I0216 17:27:29.749072  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:29.765859  201972 main.go:141] libmachine: Using SSH client type: native
	I0216 17:27:29.766268  201972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0216 17:27:29.766290  201972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-283660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-283660/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-283660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:27:29.912115  201972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:27:29.912139  201972 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 17:27:29.912171  201972 ubuntu.go:177] setting up certificates
	I0216 17:27:29.912181  201972 provision.go:83] configureAuth start
	I0216 17:27:29.912240  201972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-283660
	I0216 17:27:29.946263  201972 provision.go:138] copyHostCerts
	I0216 17:27:29.946327  201972 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 17:27:29.946340  201972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 17:27:29.946415  201972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 17:27:29.946513  201972 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 17:27:29.946524  201972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 17:27:29.946551  201972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 17:27:29.946609  201972 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 17:27:29.946619  201972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 17:27:29.946695  201972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 17:27:29.946766  201972 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-283660 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-283660]
	I0216 17:27:30.622649  201972 provision.go:172] copyRemoteCerts
	I0216 17:27:30.622719  201972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:27:30.622775  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:30.643069  201972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:27:30.745320  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 17:27:30.768946  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0216 17:27:30.792533  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:27:30.815952  201972 provision.go:86] duration metric: configureAuth took 903.758127ms
	I0216 17:27:30.815987  201972 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:27:30.816196  201972 config.go:182] Loaded profile config "kubernetes-upgrade-283660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:27:30.816261  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:30.835840  201972 main.go:141] libmachine: Using SSH client type: native
	I0216 17:27:30.836235  201972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0216 17:27:30.836252  201972 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:27:30.977152  201972 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:27:30.977176  201972 ubuntu.go:71] root file system type: overlay
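
`df --output=fstype / | tail -n 1` returning overlay is how the provisioner learns the root filesystem before templating the docker unit. The same probe can be done without shelling out by reading the statfs magic; a Linux-only sketch using golang.org/x/sys/unix:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // rootFSType mirrors the df probe in the log: statfs "/" and map the
    // filesystem magic. Only the overlay case matters for this code path.
    func rootFSType() (string, error) {
        var st unix.Statfs_t
        if err := unix.Statfs("/", &st); err != nil {
            return "", err
        }
        if st.Type == 0x794c7630 { // OVERLAYFS_SUPER_MAGIC
            return "overlay", nil
        }
        return fmt.Sprintf("unknown (0x%x)", st.Type), nil
    }

    func main() {
        fs, err := rootFSType()
        fmt.Println(fs, err)
    }
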
	I0216 17:27:30.977289  201972 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:27:30.977356  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:30.993964  201972 main.go:141] libmachine: Using SSH client type: native
	I0216 17:27:30.994396  201972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0216 17:27:30.994477  201972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:27:31.148441  201972 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:27:31.148521  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:31.164966  201972 main.go:141] libmachine: Using SSH client type: native
	I0216 17:27:31.165385  201972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 32977 <nil> <nil>}
	I0216 17:27:31.165409  201972 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:27:31.961469  201972 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:27:31.143046200 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 17:27:31.961503  201972 machine.go:91] provisioned docker machine in 5.385902676s
	I0216 17:27:31.961514  201972 client.go:171] LocalClient.Create took 13.405472429s
	I0216 17:27:31.961536  201972 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-283660" took 13.405541845s
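
The diff-gated one-liner a few lines up is the idempotent-update idiom for the generated unit: only when docker.service.new actually differs is it moved into place and the daemon-reload/enable/restart paid for. The same gate in Go (a sketch; it needs root and assumes the unit content is already rendered):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // updateDockerUnit moves the freshly rendered unit into place and
    // restarts docker only when the content actually changed, mirroring the
    // diff-or-replace shell one-liner in the log. Requires root.
    func updateDockerUnit(rendered []byte) error {
        const unit = "/lib/systemd/system/docker.service"
        current, _ := os.ReadFile(unit) // a missing unit reads as empty
        if bytes.Equal(current, rendered) {
            return nil // unchanged: skip the restart entirely
        }
        if err := os.WriteFile(unit, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = updateDockerUnit([]byte("[Unit]\nDescription=Docker Application Container Engine\n"))
    }

Skipping the restart when nothing changed is what keeps repeated starts cheap on an already-provisioned node; here the unit did differ, so the full diff and restart show up in the log.
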
	I0216 17:27:31.961547  201972 start.go:300] post-start starting for "kubernetes-upgrade-283660" (driver="docker")
	I0216 17:27:31.961558  201972 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:27:31.961631  201972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:27:31.961676  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:31.978814  201972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:27:32.077379  201972 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:27:32.080330  201972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:27:32.080365  201972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:27:32.080376  201972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:27:32.080384  201972 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:27:32.080397  201972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 17:27:32.080452  201972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 17:27:32.080544  201972 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 17:27:32.080681  201972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:27:32.088789  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:27:32.112110  201972 start.go:303] post-start completed in 150.548102ms
	I0216 17:27:32.112497  201972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-283660
	I0216 17:27:32.130412  201972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/config.json ...
	I0216 17:27:32.130675  201972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:27:32.130726  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:32.146450  201972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:27:32.241247  201972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:27:32.245677  201972 start.go:128] duration metric: createHost completed in 13.69250482s
	I0216 17:27:32.245703  201972 start.go:83] releasing machines lock for "kubernetes-upgrade-283660", held for 13.692645565s
	I0216 17:27:32.245781  201972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-283660
	I0216 17:27:32.262822  201972 ssh_runner.go:195] Run: cat /version.json
	I0216 17:27:32.262884  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:32.262832  201972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:27:32.263035  201972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:27:32.285943  201972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:27:32.288394  201972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32977 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:27:32.380227  201972 ssh_runner.go:195] Run: systemctl --version
	I0216 17:27:32.520678  201972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 17:27:32.524976  201972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 17:27:32.560127  201972 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 17:27:32.560270  201972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:27:32.577203  201972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:27:32.592680  201972 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0216 17:27:32.592742  201972 start.go:475] detecting cgroup driver to use...
	I0216 17:27:32.592784  201972 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:27:32.592914  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:27:32.608544  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:27:32.618181  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:27:32.627647  201972 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:27:32.627741  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:27:32.637607  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:27:32.646883  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:27:32.656368  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:27:32.666132  201972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:27:32.675426  201972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:27:32.685267  201972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:27:32.694089  201972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:27:32.702340  201972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:27:32.785419  201972 ssh_runner.go:195] Run: sudo systemctl restart containerd
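
The sed run above normalizes /etc/containerd/config.toml to the cgroupfs driver detected on the host: pin the sandbox image, drop systemd_cgroup, force SystemdCgroup = false, point conf_dir at /etc/cni/net.d, then daemon-reload and restart containerd. The core rewrite expressed as a Go regexp instead of sed (a pure-function sketch over the file contents):

    package main

    import (
        "fmt"
        "regexp"
    )

    // forceCgroupfs performs the same substitution as the
    // `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
    // above: whatever the current value, containerd ends up on cgroupfs.
    func forceCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "[plugins.cri]\n  SystemdCgroup = true\n"
        fmt.Print(forceCgroupfs(in))
    }
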
	I0216 17:27:32.892241  201972 start.go:475] detecting cgroup driver to use...
	I0216 17:27:32.892286  201972 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:27:32.892343  201972 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:27:32.908051  201972 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:27:32.908128  201972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:27:32.921278  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:27:32.940681  201972 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:27:32.944184  201972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:27:32.953683  201972 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:27:32.977205  201972 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:27:33.082339  201972 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:27:33.181489  201972 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:27:33.181677  201972 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:27:33.201223  201972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:27:33.295721  201972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:27:33.555731  201972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:27:33.584622  201972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:27:33.609916  201972 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:27:33.610064  201972 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-283660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:27:33.625222  201972 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 17:27:33.628581  201972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
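
The /etc/hosts rewrite above uses a filter-then-append pattern so repeated starts never stack duplicate entries: strip any existing host.minikube.internal line, append the current gateway IP, and copy the temp file back into place. The same transform as a pure Go function (sketch; writing the file back is left to the caller):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostEntry drops any stale "host.minikube.internal" line and
    // appends the current gateway mapping, matching the grep -v / echo
    // pipeline in the log.
    func upsertHostEntry(hosts, gatewayIP string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, gatewayIP+"\thost.minikube.internal")
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHostEntry("127.0.0.1\tlocalhost\n", "192.168.76.1"))
    }
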
	I0216 17:27:33.638983  201972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:27:33.639054  201972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:27:33.656220  201972 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:27:33.656237  201972 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:27:33.656291  201972 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:27:33.665372  201972 ssh_runner.go:195] Run: which lz4
	I0216 17:27:33.668751  201972 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:27:33.671732  201972 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:27:33.671768  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (394173841 bytes)
	I0216 17:27:35.709851  201972 docker.go:649] Took 2.041214 seconds to copy over tarball
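
The failed stat a few lines up is the cheap gate in front of the 394 MB transfer: the tarball is only copied when the remote file is missing or its size differs from the cached one. A sketch of that decision, with the stat output and error passed in rather than run over SSH:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // needsCopy decides whether the preload tarball must be transferred,
    // based on the output of `stat -c "%s %y" <path>` run on the node. A
    // failed stat (file missing) or a size mismatch both force the copy.
    func needsCopy(statOut string, statErr error, wantSize int64) bool {
        if statErr != nil {
            return true
        }
        fields := strings.Fields(statOut)
        if len(fields) == 0 {
            return true
        }
        size, err := strconv.ParseInt(fields[0], 10, 64)
        return err != nil || size != wantSize
    }

    func main() {
        // Mirrors the log: stat exited non-zero, so the 394173841-byte copy runs.
        fmt.Println(needsCopy("", fmt.Errorf("exit status 1"), 394173841))
    }
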
	I0216 17:27:35.709943  201972 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:27:38.126967  201972 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.416991256s)
	I0216 17:27:38.126993  201972 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:27:38.218828  201972 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:27:38.227824  201972 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:27:38.252282  201972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:27:38.355019  201972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:27:40.235236  201972 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.877279712s)
	I0216 17:27:40.235318  201972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:27:40.261636  201972 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:27:40.261655  201972 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:27:40.261673  201972 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:27:40.263094  201972 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:27:40.263296  201972 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:27:40.263432  201972 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:27:40.263586  201972 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:27:40.263677  201972 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:27:40.263746  201972 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:27:40.263911  201972 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:27:40.264124  201972 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:27:40.264629  201972 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:27:40.264829  201972 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:27:40.265475  201972 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:27:40.265838  201972 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:27:40.266294  201972 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:27:40.266729  201972 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:27:40.266921  201972 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:27:40.267646  201972 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	W0216 17:27:40.615774  201972 image.go:265] image registry.k8s.io/kube-proxy:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.615952  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	W0216 17:27:40.634197  201972 image.go:265] image registry.k8s.io/etcd:3.3.15-0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.634396  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	W0216 17:27:40.641805  201972 image.go:265] image registry.k8s.io/kube-apiserver:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.641972  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:27:40.644750  201972 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "553e5791dff2eebc7969b9df892ad18a487fcfa425e098ed3059173e36d98f72" in container runtime
	I0216 17:27:40.644843  201972 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:27:40.644921  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	W0216 17:27:40.648923  201972 image.go:265] image registry.k8s.io/kube-controller-manager:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.649158  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	W0216 17:27:40.652326  201972 image.go:265] image registry.k8s.io/kube-scheduler:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.652571  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	W0216 17:27:40.658435  201972 image.go:265] image registry.k8s.io/pause:3.1 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.658658  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0216 17:27:40.672396  201972 image.go:265] image registry.k8s.io/coredns:1.6.2 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.672677  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:27:40.688202  201972 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "3f4e1b5a89fe11634ed042397d01167d866dfa3225cfed8279f54ec7f8f58486" in container runtime
	I0216 17:27:40.688291  201972 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:27:40.688368  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:27:40.694224  201972 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "06c3d6081b24d1d3f9c703ae2e40666f3237db9490060a03c4b29894a78205ef" in container runtime
	I0216 17:27:40.694308  201972 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:27:40.694390  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:27:40.697211  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:27:40.754661  201972 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "e82518cbd8204462b7b3756330f327ee6de72bbb84aaebc4c8cadf77c821a661" in container runtime
	I0216 17:27:40.754745  201972 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:27:40.754834  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:27:40.754961  201972 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5" in container runtime
	I0216 17:27:40.754998  201972 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:27:40.755052  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:27:40.755163  201972 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "051b2962d7b329402cf101d688a2de7bc400efea9dd4de77745af5d77489a847" in container runtime
	I0216 17:27:40.755213  201972 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:27:40.755253  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:27:40.755357  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:27:40.755435  201972 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "5f93833eff730f6c51ed0232bb218db5ab7bbb05ed0d460c4678d8b433670640" in container runtime
	I0216 17:27:40.755469  201972 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:27:40.755532  201972 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:27:40.762359  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.16.0
	W0216 17:27:40.803563  201972 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0216 17:27:40.803813  201972 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:27:40.808727  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:27:40.808803  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:27:40.808840  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/pause_3.1
	I0216 17:27:40.809047  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.2
	I0216 17:27:40.823953  201972 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0216 17:27:40.823995  201972 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:27:40.824044  201972 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:27:40.854351  201972 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0216 17:27:40.854465  201972 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:27:40.858012  201972 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0216 17:27:40.858046  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0216 17:27:40.938737  201972 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:27:40.938765  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0216 17:27:41.195284  201972 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0216 17:27:41.195350  201972 cache_images.go:92] LoadImages completed in 933.663625ms
	W0216 17:27:41.195431  201972 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
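
The warning pairs above ("arch mismatch: want arm64 got amd64. fixing") show the cache-validation path: each required image is inspected in the runtime, its architecture and hash compared with what the cache expects, and mismatches removed and reloaded from the per-arch cache directory seen in the "Loading image from" lines. The architecture half of that check, sketched against the docker CLI:

    package main

    import (
        "fmt"
        "os/exec"
        "runtime"
        "strings"
    )

    // imageArchMatches asks the docker daemon for an image's architecture
    // and compares it with what this host needs; this is the check behind
    // the "want arm64 got amd64" warnings in the log.
    func imageArchMatches(image string) (bool, error) {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Architecture}}", image).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == runtime.GOARCH, nil
    }

    func main() {
        ok, err := imageArchMatches("registry.k8s.io/kube-proxy:v1.16.0")
        fmt.Println(ok, err)
    }
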
	I0216 17:27:41.195495  201972 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:27:41.250220  201972 cni.go:84] Creating CNI manager for ""
	I0216 17:27:41.250246  201972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:27:41.250264  201972 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:27:41.250306  201972 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-283660 NodeName:kubernetes-upgrade-283660 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:27:41.250483  201972 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-283660"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-283660
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
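	The generated kubeadm config above drives everything that follows. One sanity check (a sketch, assuming the v1.16.0 binaries staged under /var/lib/minikube/binaries as this log shows, and run once the file has been copied to /var/tmp/minikube/kubeadm.yaml) is to ask kubeadm which images the config implies, which also surfaces YAML typos before init runs:
	    sudo /var/lib/minikube/binaries/v1.16.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml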
	I0216 17:27:41.250570  201972 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-283660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-283660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:27:41.250637  201972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:27:41.259538  201972 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:27:41.259612  201972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:27:41.268453  201972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0216 17:27:41.288238  201972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:27:41.306375  201972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
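	With the kubelet unit, its 10-kubeadm.conf drop-in, and kubeadm.yaml.new staged by the scp calls above, the effective unit that systemd will run can be inspected with standard tooling (a sanity check, not part of the minikube flow):
	    systemctl cat kubelet                 # unit file plus the 10-kubeadm.conf drop-in
	    systemctl show -p ExecStart kubelet   # the merged ExecStart line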
	I0216 17:27:41.323532  201972 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:27:41.327323  201972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
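	The /etc/hosts one-liner above is dense; unpacked, it strips any stale control-plane.minikube.internal entry, appends the fresh mapping, and only then copies the staged file into place, because a plain shell redirection into /etc/hosts would run as the unprivileged SSH user and fail. The same idiom reformatted for readability (paths unchanged; /tmp/h.$$ uses the shell PID as a unique temp name, and && makes the copy conditional on staging succeeding):
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo '192.168.76.2	control-plane.minikube.internal'
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts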
	I0216 17:27:41.339686  201972 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660 for IP: 192.168.76.2
	I0216 17:27:41.339717  201972 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:41.339844  201972 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 17:27:41.339893  201972 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 17:27:41.339954  201972 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.key
	I0216 17:27:41.339970  201972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.crt with IP's: []
	I0216 17:27:42.087565  201972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.crt ...
	I0216 17:27:42.087604  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.crt: {Name:mk880a630986a616d46260ae8791a66e34be785a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.087810  201972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.key ...
	I0216 17:27:42.087823  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.key: {Name:mk9a9e94e939ea8b444c4d8df6c5e7f3de6cc3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.087901  201972 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key.31bdca25
	I0216 17:27:42.087915  201972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 17:27:42.320208  201972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt.31bdca25 ...
	I0216 17:27:42.320240  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt.31bdca25: {Name:mkd922f8635c1ba09879c1a112b7e506bc35e7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.320423  201972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key.31bdca25 ...
	I0216 17:27:42.320438  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key.31bdca25: {Name:mk5f3f1dafed895fff8e19176d268add8272104c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.320521  201972 certs.go:337] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt
	I0216 17:27:42.320596  201972 certs.go:341] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key
	I0216 17:27:42.320675  201972 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.key
	I0216 17:27:42.320692  201972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.crt with IP's: []
	I0216 17:27:42.790983  201972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.crt ...
	I0216 17:27:42.791023  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.crt: {Name:mk50d391e11ce86ea8e55d57d4905ea15114f28a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.791205  201972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.key ...
	I0216 17:27:42.791221  201972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.key: {Name:mke797284f11385037b5bb2d6be5b34f58203bcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:27:42.791414  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 17:27:42.791462  201972 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 17:27:42.791479  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 17:27:42.791506  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 17:27:42.791535  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:27:42.791563  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 17:27:42.791613  201972 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:27:42.792205  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:27:42.817244  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 17:27:42.841976  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:27:42.867163  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 17:27:42.891240  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:27:42.917028  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 17:27:42.943560  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:27:42.977508  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 17:27:43.002351  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:27:43.027377  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 17:27:43.052692  201972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 17:27:43.077000  201972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
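	After the certificates above land under /var/lib/minikube/certs, a cert/key mismatch is easy to rule out by hand. A minimal check with standard openssl (assuming RSA keys, which the ~1.7 kB PEM key sizes logged above suggest):
	    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	    sudo openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5
	    # the two digests must match; the SANs can be read with:
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'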
	I0216 17:27:43.095682  201972 ssh_runner.go:195] Run: openssl version
	I0216 17:27:43.101890  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:27:43.111160  201972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:27:43.114505  201972 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:27:43.114570  201972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:27:43.121397  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:27:43.131020  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 17:27:43.140558  201972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 17:27:43.144037  201972 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 17:27:43.144139  201972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 17:27:43.151179  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
	I0216 17:27:43.160271  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 17:27:43.169891  201972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 17:27:43.173393  201972 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 17:27:43.173456  201972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 17:27:43.180127  201972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
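	The openssl x509 -hash calls above compute the subject-hash filenames (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's -CApath lookup expects, so each ln -fs makes the corresponding CA discoverable by hash. Verifying the wiring for the minikube CA (the leaf cert name in the last line is hypothetical):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem
	    openssl verify -CApath /etc/ssl/certs some-leaf-cert.pem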
	I0216 17:27:43.189738  201972 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:27:43.193086  201972 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0216 17:27:43.193145  201972 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-283660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-283660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:27:43.193258  201972 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:27:43.209781  201972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:27:43.218526  201972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:27:43.227032  201972 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:27:43.227134  201972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:27:43.235848  201972 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:27:43.235911  201972 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:27:43.290183  201972 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:27:43.290482  201972 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:27:43.489154  201972 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:27:43.489241  201972 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:27:43.489304  201972 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:27:43.489354  201972 kubeadm.go:322] OS: Linux
	I0216 17:27:43.489403  201972 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:27:43.489475  201972 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:27:43.489539  201972 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:27:43.489599  201972 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:27:43.489664  201972 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:27:43.489724  201972 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:27:43.572489  201972 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:27:43.572614  201972 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:27:43.572736  201972 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:27:43.748024  201972 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:27:43.749940  201972 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:27:43.758655  201972 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:27:43.856310  201972 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:27:43.859825  201972 out.go:204]   - Generating certificates and keys ...
	I0216 17:27:43.859959  201972 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:27:43.860047  201972 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:27:45.239506  201972 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 17:27:46.563389  201972 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 17:27:47.708951  201972 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 17:27:48.160743  201972 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 17:27:48.731042  201972 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 17:27:48.731394  201972 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-283660 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 17:27:49.317379  201972 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 17:27:49.317693  201972 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-283660 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0216 17:27:50.129554  201972 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 17:27:50.463918  201972 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 17:27:51.028171  201972 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 17:27:51.028554  201972 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:27:51.829963  201972 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:27:52.883479  201972 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:27:53.166099  201972 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:27:53.874974  201972 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:27:53.877052  201972 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:27:53.888629  201972 out.go:204]   - Booting up control plane ...
	I0216 17:27:53.888758  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:27:53.901944  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:27:53.904072  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:27:53.905616  201972 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:27:53.908972  201972 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:28:33.909654  201972 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:31:53.911337  201972 kubeadm.go:322] 
	I0216 17:31:53.911412  201972 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:31:53.911464  201972 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:31:53.911474  201972 kubeadm.go:322] 
	I0216 17:31:53.911506  201972 kubeadm.go:322] This error is likely caused by:
	I0216 17:31:53.911541  201972 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:31:53.911647  201972 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:31:53.911658  201972 kubeadm.go:322] 
	I0216 17:31:53.911760  201972 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:31:53.911794  201972 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:31:53.911827  201972 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:31:53.911835  201972 kubeadm.go:322] 
	I0216 17:31:53.911931  201972 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:31:53.912022  201972 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:31:53.912102  201972 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:31:53.912150  201972 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:31:53.912224  201972 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:31:53.912258  201972 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:31:53.915868  201972 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:31:53.916008  201972 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:31:53.916211  201972 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 17:31:53.916312  201972 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:31:53.916397  201972 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:31:53.916463  201972 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
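	When the wait-control-plane phase times out like this, the log's own advice is the right order of operations: check the kubelet first, then look for crashed control-plane containers. As a single pass over the node (exactly the commands kubeadm suggests, with a tail to keep journal output readable):
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    docker ps -a | grep kube | grep -v pause
	    # then: docker logs <CONTAINERID> for any failing container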
	W0216 17:31:53.916580  201972 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-283660 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-283660 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 17:31:53.916656  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:31:54.787252  201972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:31:54.799199  201972 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:31:54.799266  201972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:31:54.808667  201972 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:31:54.808711  201972 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:31:54.880461  201972 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:31:54.880698  201972 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:31:55.085037  201972 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:31:55.085112  201972 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:31:55.085162  201972 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:31:55.085199  201972 kubeadm.go:322] OS: Linux
	I0216 17:31:55.085245  201972 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:31:55.085294  201972 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:31:55.085341  201972 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:31:55.085390  201972 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:31:55.085445  201972 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:31:55.085494  201972 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:31:55.197868  201972 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:31:55.198047  201972 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:31:55.198179  201972 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:31:55.388462  201972 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:31:55.390893  201972 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:31:55.399903  201972 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:31:55.487218  201972 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:31:55.491656  201972 out.go:204]   - Generating certificates and keys ...
	I0216 17:31:55.491839  201972 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:31:55.491947  201972 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:31:55.492078  201972 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:31:55.492156  201972 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:31:55.492658  201972 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:31:55.493334  201972 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:31:55.494056  201972 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:31:55.494820  201972 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:31:55.495447  201972 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:31:55.496189  201972 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:31:55.496564  201972 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:31:55.496865  201972 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:31:56.538081  201972 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:31:57.235018  201972 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:31:57.396372  201972 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:31:57.654162  201972 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:31:57.655283  201972 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:31:57.657286  201972 out.go:204]   - Booting up control plane ...
	I0216 17:31:57.657387  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:31:57.663168  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:31:57.664491  201972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:31:57.665474  201972 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:31:57.670344  201972 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:32:37.671399  201972 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:35:57.671505  201972 kubeadm.go:322] 
	I0216 17:35:57.671660  201972 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:35:57.671716  201972 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:35:57.671722  201972 kubeadm.go:322] 
	I0216 17:35:57.671756  201972 kubeadm.go:322] This error is likely caused by:
	I0216 17:35:57.671786  201972 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:35:57.671901  201972 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:35:57.671909  201972 kubeadm.go:322] 
	I0216 17:35:57.672007  201972 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:35:57.672037  201972 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:35:57.672066  201972 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:35:57.672071  201972 kubeadm.go:322] 
	I0216 17:35:57.672167  201972 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:35:57.672254  201972 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:35:57.672329  201972 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:35:57.672373  201972 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:35:57.672443  201972 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:35:57.672473  201972 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:35:57.675058  201972 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:35:57.675202  201972 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:35:57.675407  201972 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 17:35:57.675513  201972 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:35:57.675595  201972 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:35:57.675662  201972 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:35:57.675757  201972 kubeadm.go:406] StartCluster complete in 8m14.482615622s
	I0216 17:35:57.675850  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:35:57.694481  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.694504  201972 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:35:57.694562  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:35:57.716044  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.716067  201972 logs.go:278] No container was found matching "etcd"
	I0216 17:35:57.716155  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:35:57.734189  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.734212  201972 logs.go:278] No container was found matching "coredns"
	I0216 17:35:57.734269  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:35:57.751375  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.751397  201972 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:35:57.751455  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:35:57.768819  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.768841  201972 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:35:57.768902  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:35:57.786675  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.786695  201972 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:35:57.786759  201972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:35:57.805282  201972 logs.go:276] 0 containers: []
	W0216 17:35:57.805304  201972 logs.go:278] No container was found matching "kindnet"
	I0216 17:35:57.805314  201972 logs.go:123] Gathering logs for kubelet ...
	I0216 17:35:57.805338  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:35:57.828059  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.022350    5371 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-283660_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:35:57.828880  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.025677    5371 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-283660_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:35:57.838511  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:38 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:38.996891    5371 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-283660_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:35:57.850927  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:42 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:42.996060    5371 pod_workers.go:191] Error syncing pod f8cf329aaa5ca438da97374f1431f778 ("etcd-kubernetes-upgrade-283660_kube-system(f8cf329aaa5ca438da97374f1431f778)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:35:57.863032  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:48 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:48.059411    5371 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-283660_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:35:57.863791  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:48 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:48.071524    5371 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-283660_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:35:57.877152  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:53 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:53.996850    5371 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-283660_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:35:57.882492  201972 logs.go:138] Found kubelet problem: Feb 16 17:35:55 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:55.997090    5371 pod_workers.go:191] Error syncing pod f8cf329aaa5ca438da97374f1431f778 ("etcd-kubernetes-upgrade-283660_kube-system(f8cf329aaa5ca438da97374f1431f778)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
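	The repeated ImageInspectError entries above ("Id or size of image ... is not set") are what the kubelet reports when docker cannot produce a usable inspect result for the pinned k8s.gcr.io v1.16.0 images, consistent with the arm64/amd64 cache mismatch and the missing kube-proxy cache file flagged earlier in this log. A quick way to confirm an architecture mismatch on the node (plain docker CLI; image name taken from the kubelet error):
	    docker image inspect --format '{{.Os}}/{{.Architecture}}' k8s.gcr.io/kube-apiserver:v1.16.0
	    uname -m    # aarch64 on this runner; the two must agree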
	I0216 17:35:57.886435  201972 logs.go:123] Gathering logs for dmesg ...
	I0216 17:35:57.886456  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:35:57.906782  201972 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:35:57.906812  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:35:58.207698  201972 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:35:58.207727  201972 logs.go:123] Gathering logs for Docker ...
	I0216 17:35:58.207743  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:35:58.228443  201972 logs.go:123] Gathering logs for container status ...
	I0216 17:35:58.228479  201972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0216 17:35:58.278943  201972 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:35:58.278984  201972 out.go:239] * 
	W0216 17:35:58.279035  201972 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:35:58.279064  201972 out.go:239] * 
	W0216 17:35:58.279989  201972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:35:58.282512  201972 out.go:177] X Problems detected in kubelet:
	I0216 17:35:58.284952  201972 out.go:177]   Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.022350    5371 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-283660_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:35:58.287380  201972 out.go:177]   Feb 16 17:35:35 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:35.025677    5371 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-283660_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:35:58.289302  201972 out.go:177]   Feb 16 17:35:38 kubernetes-upgrade-283660 kubelet[5371]: E0216 17:35:38.996891    5371 pod_workers.go:191] Error syncing pod 002009a6866b0a2506f8d5c8c4da7548 ("kube-apiserver-kubernetes-upgrade-283660_kube-system(002009a6866b0a2506f8d5c8c4da7548)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:35:58.292770  201972 out.go:177] 
	W0216 17:35:58.294798  201972 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:35:58.294854  201972 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:35:58.294878  201972 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:35:58.297079  201972 out.go:177] 

                                                
                                                
** /stderr **
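The ImageInspectError entries flagged under "Problems detected in kubelet" above all concern the v1.16.0 control-plane images, and kubeadm's advice points at kubelet health. A minimal follow-up sketch, assuming the profile name from this test and the standard minikube and docker CLIs (these commands are illustrative, not part of the test run):

	# kubelet health on the node, per kubeadm's own suggestion
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-283660 sudo systemctl status kubelet --no-pager
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-283660 sudo journalctl -xeu kubelet --no-pager
	# inspect one of the images the kubelet could not read metadata for
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-283660 docker image inspect k8s.gcr.io/kube-scheduler:v1.16.0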
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: exit status 109
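The log's own suggestion is to retry with the systemd cgroup driver. A hedged sketch of that retry, reusing the start command that just failed with the suggested flag appended (whether it clears the ImageInspectError above is not established by this run):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd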
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-283660
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-283660: (1.262016543s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-283660 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-283660 status --format={{.Host}}: exit status 7 (84.413235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.289157346s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-283660 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (99.651802ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-283660] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-283660
	    minikube start -p kubernetes-upgrade-283660 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2836602 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-283660 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
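The downgrade attempt exits with status 106 before modifying anything, so the profile should still be serving v1.29.0-rc.2. One way to confirm, building on the `kubectl version --output=json` call this test already makes (jq is an assumption here; the test itself does not use it):

	kubectl --context kubernetes-upgrade-283660 version --output=json | jq -r .serverVersion.gitVersion
	# expected: v1.29.0-rc.2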
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-283660 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.088589936s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-16 17:37:11.306380652 +0000 UTC m=+3341.489792863
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-283660
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-283660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c",
	        "Created": "2024-02-16T17:27:25.516817952Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:36:00.403958819Z",
	            "FinishedAt": "2024-02-16T17:35:58.906680757Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c/hosts",
	        "LogPath": "/var/lib/docker/containers/c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c/c28fa05553e88ad9c6a020d77f44f415d931410bad74c8489ab807aa7f4da50c-json.log",
	        "Name": "/kubernetes-upgrade-283660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-283660:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-283660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/18388ab2dd55dec6116948d68a4bdcf5871d400b9044b4ca59177eb538f17d11-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18388ab2dd55dec6116948d68a4bdcf5871d400b9044b4ca59177eb538f17d11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18388ab2dd55dec6116948d68a4bdcf5871d400b9044b4ca59177eb538f17d11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18388ab2dd55dec6116948d68a4bdcf5871d400b9044b4ca59177eb538f17d11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-283660",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-283660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-283660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-283660",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-283660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0d38fa0a5f4edb3ad70db8b9b0b00274d185450dbdef6468a220da7e8c93609",
	            "SandboxKey": "/var/run/docker/netns/f0d38fa0a5f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33011"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33008"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33010"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33009"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-283660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c28fa05553e8",
	                        "kubernetes-upgrade-283660"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "cb5aba07f33d38dc01339411b2787a4defb267082c382abe77c5817d8cc46c58",
	                    "EndpointID": "67a537d886653decd9be15056b050fdc5baf619adf8d83c9c3c2ac9eb191536c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-283660",
	                        "c28fa05553e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
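When only a few fields from a dump like the one above matter, `docker inspect` accepts Go templates via -f; a sketch against the container from this test (plain docker CLI, expected values taken from the JSON above):

	docker inspect -f '{{.State.Status}}' kubernetes-upgrade-283660                                        # running
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-283660  # 192.168.76.2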
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-283660 -n kubernetes-upgrade-283660
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-283660 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-283660 logs -n 25: (1.503053552s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo journalctl                       | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo docker                           | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:36 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:36 UTC | 16 Feb 24 17:37 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo                                  | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo cat                              | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo containerd                       | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC |                     |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo systemctl                        | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo find                             | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-850655 sudo crio                             | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-850655                                       | auto-850655    | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC | 16 Feb 24 17:37 UTC |
	| start   | -p kindnet-850655                                    | kindnet-850655 | jenkins | v1.32.0 | 16 Feb 24 17:37 UTC |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                        |                |         |         |                     |                     |
	|         | --container-runtime=docker                           |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 17:37:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 17:37:08.472267  247887 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:37:08.472408  247887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:37:08.472419  247887 out.go:304] Setting ErrFile to fd 2...
	I0216 17:37:08.472425  247887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:37:08.472702  247887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:37:08.473118  247887 out.go:298] Setting JSON to false
	I0216 17:37:08.474098  247887 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4778,"bootTime":1708100250,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 17:37:08.474169  247887 start.go:139] virtualization:  
	I0216 17:37:08.478898  247887 out.go:177] * [kindnet-850655] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 17:37:08.481703  247887 notify.go:220] Checking for updates...
	I0216 17:37:08.484592  247887 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:37:08.486537  247887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:37:08.489921  247887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:37:08.492187  247887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 17:37:08.494530  247887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 17:37:08.496834  247887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:37:08.500609  247887 config.go:182] Loaded profile config "kubernetes-upgrade-283660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 17:37:08.500777  247887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:37:08.524505  247887 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:37:08.524615  247887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:37:08.593985  247887 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:37:08.583276338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:37:08.594105  247887 docker.go:295] overlay module found
	I0216 17:37:08.598038  247887 out.go:177] * Using the docker driver based on user configuration
	I0216 17:37:08.599887  247887 start.go:299] selected driver: docker
	I0216 17:37:08.599910  247887 start.go:903] validating driver "docker" against <nil>
	I0216 17:37:08.599925  247887 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:37:08.600571  247887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:37:08.660880  247887 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:37:08.65225724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:37:08.661043  247887 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 17:37:08.661272  247887 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:37:08.663504  247887 out.go:177] * Using Docker driver with root privileges
	I0216 17:37:08.665490  247887 cni.go:84] Creating CNI manager for "kindnet"
	I0216 17:37:08.665518  247887 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0216 17:37:08.665535  247887 start_flags.go:323] config:
	{Name:kindnet-850655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-850655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:37:08.667978  247887 out.go:177] * Starting control plane node kindnet-850655 in cluster kindnet-850655
	I0216 17:37:08.670235  247887 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:37:08.672132  247887 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:37:08.674059  247887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 17:37:08.674126  247887 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0216 17:37:08.674139  247887 cache.go:56] Caching tarball of preloaded images
	I0216 17:37:08.674143  247887 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:37:08.674248  247887 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 17:37:08.674262  247887 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 17:37:08.674362  247887 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/config.json ...
	I0216 17:37:08.674380  247887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/config.json: {Name:mkc0956ca73ddcbb21f0c0e288230d67a4c6f128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:37:08.690107  247887 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:37:08.690132  247887 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:37:08.690151  247887 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:37:08.690178  247887 start.go:365] acquiring machines lock for kindnet-850655: {Name:mkb30489b6c26b46e338e34f5e53d89aa37ab78b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:37:08.690306  247887 start.go:369] acquired machines lock for "kindnet-850655" in 102.508µs
	I0216 17:37:08.690338  247887 start.go:93] Provisioning new machine with config: &{Name:kindnet-850655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-850655 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:37:08.690428  247887 start.go:125] createHost starting for "" (driver="docker")
	I0216 17:37:06.329021  242011 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0216 17:37:06.329061  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:07.323856  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 17:37:07.323894  242011 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:37:07.323907  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:07.641170  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 17:37:07.641203  242011 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:37:07.641216  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:07.727015  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 17:37:07.727050  242011 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 17:37:07.828319  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:07.872339  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 17:37:07.872376  242011 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:37:08.328515  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:08.341478  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 17:37:08.341507  242011 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 17:37:08.828850  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:08.838076  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0216 17:37:08.857460  242011 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 17:37:08.857487  242011 api_server.go:131] duration metric: took 7.529601709s to wait for apiserver health ...
	I0216 17:37:08.857497  242011 cni.go:84] Creating CNI manager for ""
	I0216 17:37:08.857510  242011 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 17:37:08.860416  242011 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 17:37:08.862320  242011 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 17:37:08.874205  242011 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0216 17:37:08.897804  242011 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:37:08.913911  242011 system_pods.go:59] 5 kube-system pods found
	I0216 17:37:08.913944  242011 system_pods.go:61] "etcd-kubernetes-upgrade-283660" [fc68188a-6628-4823-9d4d-86a7569c90da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 17:37:08.913957  242011 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-283660" [4250a20c-18a2-45de-8621-9e01a9d94fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 17:37:08.913964  242011 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-283660" [afc3e137-dbfc-4cca-bb32-0513408f8fc4] Pending
	I0216 17:37:08.913974  242011 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-283660" [421c102d-fba7-435b-9f87-b962142b964f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 17:37:08.913993  242011 system_pods.go:61] "storage-provisioner" [bceb4d8b-d739-4419-a02e-350f77a4a1fa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 17:37:08.914000  242011 system_pods.go:74] duration metric: took 16.174874ms to wait for pod list to return data ...
	I0216 17:37:08.914007  242011 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:37:08.917622  242011 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 17:37:08.917656  242011 node_conditions.go:123] node cpu capacity is 2
	I0216 17:37:08.917667  242011 node_conditions.go:105] duration metric: took 3.654968ms to run NodePressure ...
	I0216 17:37:08.917684  242011 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:37:09.256523  242011 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 17:37:09.264753  242011 ops.go:34] apiserver oom_adj: -16
	I0216 17:37:09.264771  242011 kubeadm.go:640] restartCluster took 23.918673813s
	I0216 17:37:09.264780  242011 kubeadm.go:406] StartCluster complete in 23.950361945s
	I0216 17:37:09.264796  242011 settings.go:142] acquiring lock: {Name:mkb7d1073df18b92aae32c7933eb8e8868b57c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:37:09.264855  242011 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:37:09.265527  242011 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:37:09.266178  242011 kapi.go:59] client config for kubernetes-upgrade-283660: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.key", CAFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c81d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 17:37:09.266676  242011 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 17:37:09.266896  242011 config.go:182] Loaded profile config "kubernetes-upgrade-283660": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 17:37:09.266930  242011 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 17:37:09.266983  242011 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-283660"
	I0216 17:37:09.266996  242011 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-283660"
	W0216 17:37:09.267002  242011 addons.go:243] addon storage-provisioner should already be in state true
	I0216 17:37:09.267043  242011 host.go:66] Checking if "kubernetes-upgrade-283660" exists ...
	I0216 17:37:09.267431  242011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:37:09.267626  242011 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-283660"
	I0216 17:37:09.267644  242011 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-283660"
	I0216 17:37:09.267886  242011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:37:09.274068  242011 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-283660" context rescaled to 1 replicas
	I0216 17:37:09.274104  242011 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:37:09.281884  242011 out.go:177] * Verifying Kubernetes components...
	I0216 17:37:09.289099  242011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:37:09.372205  242011 kapi.go:59] client config for kubernetes-upgrade-283660: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.crt", KeyFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubernetes-upgrade-283660/client.key", CAFile:"/home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c81d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0216 17:37:09.372472  242011 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-283660"
	W0216 17:37:09.372482  242011 addons.go:243] addon default-storageclass should already be in state true
	I0216 17:37:09.372506  242011 host.go:66] Checking if "kubernetes-upgrade-283660" exists ...
	I0216 17:37:09.373053  242011 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-283660 --format={{.State.Status}}
	I0216 17:37:09.375643  242011 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:37:09.378206  242011 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:37:09.378229  242011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 17:37:09.378299  242011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:37:09.439752  242011 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 17:37:09.439773  242011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 17:37:09.439837  242011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-283660
	I0216 17:37:09.442066  242011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:37:09.469800  242011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33012 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/kubernetes-upgrade-283660/id_rsa Username:docker}
	I0216 17:37:09.570107  242011 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0216 17:37:09.570227  242011 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:37:09.570352  242011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:37:09.603137  242011 api_server.go:72] duration metric: took 328.992883ms to wait for apiserver process to appear ...
	I0216 17:37:09.603212  242011 api_server.go:88] waiting for apiserver healthz status ...
	I0216 17:37:09.603252  242011 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 17:37:09.625703  242011 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0216 17:37:09.629717  242011 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 17:37:09.629779  242011 api_server.go:131] duration metric: took 26.545476ms to wait for apiserver health ...
	I0216 17:37:09.629791  242011 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 17:37:09.636132  242011 system_pods.go:59] 5 kube-system pods found
	I0216 17:37:09.636203  242011 system_pods.go:61] "etcd-kubernetes-upgrade-283660" [fc68188a-6628-4823-9d4d-86a7569c90da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 17:37:09.636229  242011 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-283660" [4250a20c-18a2-45de-8621-9e01a9d94fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 17:37:09.636258  242011 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-283660" [afc3e137-dbfc-4cca-bb32-0513408f8fc4] Pending
	I0216 17:37:09.636282  242011 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-283660" [421c102d-fba7-435b-9f87-b962142b964f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 17:37:09.636305  242011 system_pods.go:61] "storage-provisioner" [bceb4d8b-d739-4419-a02e-350f77a4a1fa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0216 17:37:09.636334  242011 system_pods.go:74] duration metric: took 6.534889ms to wait for pod list to return data ...
	I0216 17:37:09.636358  242011 kubeadm.go:581] duration metric: took 362.231971ms to wait for : map[apiserver:true system_pods:true] ...
	I0216 17:37:09.636386  242011 node_conditions.go:102] verifying NodePressure condition ...
	I0216 17:37:09.655312  242011 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 17:37:09.655385  242011 node_conditions.go:123] node cpu capacity is 2
	I0216 17:37:09.655410  242011 node_conditions.go:105] duration metric: took 18.996399ms to run NodePressure ...
	I0216 17:37:09.655436  242011 start.go:228] waiting for startup goroutines ...
	I0216 17:37:09.658612  242011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 17:37:09.677913  242011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 17:37:11.186605  242011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.52790941s)
	I0216 17:37:11.186655  242011 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.508681912s)
	I0216 17:37:11.198978  242011 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0216 17:37:11.201158  242011 addons.go:505] enable addons completed in 1.934224346s: enabled=[storage-provisioner default-storageclass]
	I0216 17:37:11.201232  242011 start.go:233] waiting for cluster config update ...
	I0216 17:37:11.201256  242011 start.go:242] writing updated cluster config ...
	I0216 17:37:11.201553  242011 ssh_runner.go:195] Run: rm -f paused
	I0216 17:37:11.279044  242011 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0216 17:37:11.281559  242011 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-283660" cluster and "default" namespace by default
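
	The healthz progression in the log above (403 for system:anonymous, then 500 while the rbac/bootstrap-roles poststarthook is still failing, then 200) is the normal apiserver startup sequence, and minikube simply polls /healthz until it sees 200. Below is a minimal Go sketch of that wait loop; the endpoint is taken from the log, while the skipped certificate verification is an assumption made for brevity (minikube itself authenticates with client certificates):
	
		// healthzwait polls the apiserver /healthz endpoint until it reports
		// healthy or a deadline expires, mirroring the wait loop logged above.
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)
	
		func main() {
			client := &http.Client{
				Timeout: 2 * time.Second,
				// Assumption for brevity: skip server certificate verification.
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			deadline := time.Now().Add(60 * time.Second)
			for time.Now().Before(deadline) {
				resp, err := client.Get("https://192.168.76.2:8443/healthz")
				if err == nil {
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Println("apiserver healthy")
						return
					}
					// 403 and 500 both mean "not ready yet"; keep polling.
					fmt.Println("healthz returned", resp.StatusCode, "- retrying")
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("timed out waiting for apiserver health")
		}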
	
	
	==> Docker <==
	Feb 16 17:36:44 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:44Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 16 17:36:44 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:44Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 16 17:36:44 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:44Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 16 17:36:44 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:44Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 16 17:36:44 kubernetes-upgrade-283660 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 16 17:36:47 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1aa796e1b61e3516f536e29abf3afff724a8b6244e69d293c4d63e33f82e1728/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:47 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/207a1cc6564d71e4ef9baf8835fadf8c17cd974fdb705110a1233bf23888d572/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:47 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9d860ad31cccf3ee079bb9bdd0782e5edc77e8ac78d082556a85dc726fb6e55/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:47 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7eb329e720f0f9c7431c9cf98c17ad0236481449675d646838f16a377524c7b/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:54 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:54.948284151Z" level=info msg="ignoring event" container=d7eb329e720f0f9c7431c9cf98c17ad0236481449675d646838f16a377524c7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.255470927Z" level=info msg="ignoring event" container=4647679498bf6a7f01f65106d9feab959de03231912ff68bcc263e9fa789799d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.335651791Z" level=info msg="ignoring event" container=207a1cc6564d71e4ef9baf8835fadf8c17cd974fdb705110a1233bf23888d572 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.360790428Z" level=info msg="ignoring event" container=4aa85bc506c51c35063309279a1cfc10efd6864b541aa5ed4985bb91cec8c4b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.388004786Z" level=info msg="ignoring event" container=d9d860ad31cccf3ee079bb9bdd0782e5edc77e8ac78d082556a85dc726fb6e55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.424603422Z" level=info msg="ignoring event" container=1aa796e1b61e3516f536e29abf3afff724a8b6244e69d293c4d63e33f82e1728 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.456264140Z" level=info msg="ignoring event" container=7eca4cb3dbdb19109d2ba26c8db975593b8f21e6ec4db4a329e6b0e1d76b71a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:55 kubernetes-upgrade-283660 dockerd[3428]: time="2024-02-16T17:36:55.965951634Z" level=info msg="ignoring event" container=5ba89862b861e4bee99ff13ab0b6944dae7495dd5eed9f0dd93f9d7ffacd4646 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ed7098b96c6c9ffa46d4101c2b97125ce32d323deea048488555885729859e2/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: W0216 17:36:56.265766    3651 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9440f9a01f6361a066e8e32616c0774481fec0959ca491d6ac1ff9190974042/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: W0216 17:36:56.419382    3651 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7d773bc958b69dec28a724d1687f5614d79ec017bb003c2a2f10276133a59ca7/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: W0216 17:36:56.521327    3651 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: time="2024-02-16T17:36:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6af8c45fbbac149d84c627102379e470d55c4f47d7bf00d7915d427496cf00f/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Feb 16 17:36:56 kubernetes-upgrade-283660 cri-dockerd[3651]: W0216 17:36:56.528367    3651 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf462f7a329cd       488ec30dc9be3       12 seconds ago      Running             kube-scheduler            2                   7d773bc958b69       kube-scheduler-kubernetes-upgrade-283660
	e5cd104266035       be43264efd65f       12 seconds ago      Running             kube-controller-manager   2                   7ed7098b96c6c       kube-controller-manager-kubernetes-upgrade-283660
	02e3379d35572       0dd9b8246cda6       12 seconds ago      Running             kube-apiserver            2                   f6af8c45fbbac       kube-apiserver-kubernetes-upgrade-283660
	07f2bda1ef2a5       79f8d13ae8b88       12 seconds ago      Running             etcd                      2                   b9440f9a01f63       etcd-kubernetes-upgrade-283660
	7eca4cb3dbdb1       488ec30dc9be3       25 seconds ago      Exited              kube-scheduler            1                   d7eb329e720f0       kube-scheduler-kubernetes-upgrade-283660
	5ba89862b861e       0dd9b8246cda6       25 seconds ago      Exited              kube-apiserver            1                   d9d860ad31ccc       kube-apiserver-kubernetes-upgrade-283660
	4aa85bc506c51       79f8d13ae8b88       25 seconds ago      Exited              etcd                      1                   207a1cc6564d7       etcd-kubernetes-upgrade-283660
	4647679498bf6       be43264efd65f       25 seconds ago      Exited              kube-controller-manager   1                   1aa796e1b61e3       kube-controller-manager-kubernetes-upgrade-283660
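
	The cli_runner entries earlier in the log shell out to docker container inspect -f "{{.State.Status}}" to read container state; the table above is the CRI-side view of the same containers. A self-contained sketch of that inspect call using only the Go standard library (the container name comes from the log):
	
		// inspectstate mirrors the cli_runner invocation seen in the log:
		//   docker container inspect -f "{{.State.Status}}" kubernetes-upgrade-283660
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
		)
	
		func main() {
			out, err := exec.Command("docker", "container", "inspect",
				"-f", "{{.State.Status}}", "kubernetes-upgrade-283660").Output()
			if err != nil {
				fmt.Println("inspect failed:", err)
				return
			}
			fmt.Println("container state:", strings.TrimSpace(string(out))) // e.g. "running"
		}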
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-283660
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=kubernetes-upgrade-283660
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Feb 2024 17:36:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-283660
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Feb 2024 17:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Feb 2024 17:37:07 +0000   Fri, 16 Feb 2024 17:36:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Feb 2024 17:37:07 +0000   Fri, 16 Feb 2024 17:36:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Feb 2024 17:37:07 +0000   Fri, 16 Feb 2024 17:36:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Feb 2024 17:37:07 +0000   Fri, 16 Feb 2024 17:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-283660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 502380d4c5ad4d4eb034fef1980e7136
	  System UUID:                08ba71e0-5bb5-4691-aa0a-2ba4fe9622fb
	  Boot ID:                    28e061af-c4a8-40ad-8619-080c07806076
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-283660                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kube-apiserver-kubernetes-upgrade-283660             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-283660    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-kubernetes-upgrade-283660             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 13s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 13s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 13s)  kubelet  Node kubernetes-upgrade-283660 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
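
	The percentages in the resource tables above are each request divided by node allocatable: 100m of the node's 2 CPUs (2000m) is 5%, and the 650m total is 32%. A short sketch of that arithmetic, with the figures taken from the Allocatable block:
	
		// requestpct reproduces the percentage column of the node report:
		// request millicpu divided by allocatable millicpu.
		package main
	
		import "fmt"
	
		func pct(requestMilli, allocatableMilli int64) int64 {
			// Integer division; matches the rounded-down values in the table.
			return requestMilli * 100 / allocatableMilli
		}
	
		func main() {
			const nodeMilliCPU = 2 * 1000 // 2 CPUs, from the Allocatable block
			fmt.Println(pct(100, nodeMilliCPU)) // 5  (etcd, kube-scheduler)
			fmt.Println(pct(250, nodeMilliCPU)) // 12 (kube-apiserver)
			fmt.Println(pct(650, nodeMilliCPU)) // 32 (total CPU requests)
		}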
	
	
	==> dmesg <==
	[  +0.000736] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000ddff12b8
	[  +0.001060] FS-Cache: O-key=[8] '0461f10000000000'
	[  +0.000753] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000015591770
	[  +0.001047] FS-Cache: N-key=[8] '0461f10000000000'
	[Feb16 16:51] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=000000006efb19ee
	[  +0.001084] FS-Cache: O-key=[8] '0361f10000000000'
	[  +0.000809] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=00000000b472c289
	[  +0.001072] FS-Cache: N-key=[8] '0361f10000000000'
	[  +0.382339] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001082] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000f3dd8454
	[  +0.001083] FS-Cache: O-key=[8] '0661f10000000000'
	[  +0.000812] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000032d8be23
	[  +0.001050] FS-Cache: N-key=[8] '0661f10000000000'
	[Feb16 16:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb16 17:33] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010301] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.007673] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.156648] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [07f2bda1ef2a] <==
	{"level":"info","ts":"2024-02-16T17:37:00.66327Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:37:00.663279Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-16T17:37:00.663478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-02-16T17:37:00.663521Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-02-16T17:37:00.663602Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:37:00.663628Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-16T17:37:00.668956Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-16T17:37:00.669019Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-16T17:37:00.669028Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-16T17:37:00.670188Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-16T17:37:00.670217Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-16T17:37:01.856783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-16T17:37:01.856897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:37:01.856984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-16T17:37:01.857032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2024-02-16T17:37:01.857066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-16T17:37:01.857098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2024-02-16T17:37:01.857126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2024-02-16T17:37:01.863726Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-283660 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:37:01.864028Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:37:01.864919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:37:01.873759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:37:01.889767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-16T17:37:01.910612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:37:01.93541Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [4aa85bc506c5] <==
	{"level":"info","ts":"2024-02-16T17:36:47.461314Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-16T17:36:48.541265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-16T17:36:48.54138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-16T17:36:48.541402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-02-16T17:36:48.541416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-02-16T17:36:48.541423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-16T17:36:48.541477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-02-16T17:36:48.541487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-02-16T17:36:48.545529Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-283660 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-16T17:36:48.545567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:36:48.545601Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-16T17:36:48.553819Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-16T17:36:48.553903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-16T17:36:48.572149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-16T17:36:48.624869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-02-16T17:36:55.060759Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-16T17:36:55.060829Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-283660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2024-02-16T17:36:55.060922Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:36:55.061013Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:36:55.220618Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-16T17:36:55.220737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-16T17:36:55.220775Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2024-02-16T17:36:55.224884Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-16T17:36:55.225133Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-02-16T17:36:55.225177Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-283660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 17:37:12 up  1:19,  0 users,  load average: 4.70, 3.30, 2.63
	Linux kubernetes-upgrade-283660 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [02e3379d3557] <==
	I0216 17:37:07.064870       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0216 17:37:07.172791       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0216 17:37:07.172824       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0216 17:37:07.704876       1 shared_informer.go:318] Caches are synced for configmaps
	I0216 17:37:07.721735       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0216 17:37:07.726377       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0216 17:37:07.754878       1 aggregator.go:165] initial CRD sync complete...
	I0216 17:37:07.761171       1 autoregister_controller.go:141] Starting autoregister controller
	I0216 17:37:07.761330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0216 17:37:07.761415       1 cache.go:39] Caches are synced for autoregister controller
	I0216 17:37:07.771123       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0216 17:37:07.803983       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0216 17:37:07.728541       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0216 17:37:07.809018       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0216 17:37:07.809029       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0216 17:37:07.860856       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0216 17:37:08.074407       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0216 17:37:08.486563       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0216 17:37:08.488448       1 controller.go:624] quota admission added evaluator for: endpoints
	I0216 17:37:08.494970       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0216 17:37:09.069381       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0216 17:37:09.096589       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0216 17:37:09.163616       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0216 17:37:09.217685       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0216 17:37:09.236954       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [5ba89862b861] <==
	W0216 17:36:55.155885       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.155974       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156061       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156131       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156204       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156269       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156332       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156381       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156439       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156500       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156547       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156610       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156674       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156733       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156784       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156851       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156914       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.156977       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.157026       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.157732       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.157876       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.160710       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.160953       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.161013       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0216 17:36:55.161074       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4647679498bf] <==
	I0216 17:36:51.203300       1 serving.go:380] Generated self-signed cert in-memory
	I0216 17:36:53.907296       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0216 17:36:53.907332       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:36:53.914163       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0216 17:36:53.914509       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0216 17:36:53.919511       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0216 17:36:53.920191       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [e5cd10426603] <==
	E0216 17:37:09.687957       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0216 17:37:09.687989       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0216 17:37:09.688005       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0216 17:37:09.838260       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0216 17:37:09.838326       1 gc_controller.go:101] "Starting GC controller"
	I0216 17:37:09.838334       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0216 17:37:09.985878       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0216 17:37:09.986038       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0216 17:37:09.986046       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0216 17:37:10.137268       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0216 17:37:10.137397       1 stateful_set.go:161] "Starting stateful set controller"
	I0216 17:37:10.137406       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0216 17:37:10.288353       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0216 17:37:10.288498       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0216 17:37:10.288506       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0216 17:37:10.288513       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0216 17:37:10.438501       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0216 17:37:10.438573       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0216 17:37:10.438581       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0216 17:37:10.586398       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0216 17:37:10.587261       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0216 17:37:10.587290       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0216 17:37:10.736004       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0216 17:37:10.736179       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0216 17:37:10.736189       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	
	
	==> kube-scheduler [7eca4cb3dbdb] <==
	I0216 17:36:52.153812       1 serving.go:380] Generated self-signed cert in-memory
	W0216 17:36:55.400439       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.76.2:8443: connect: connection refused
	W0216 17:36:55.400473       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0216 17:36:55.400481       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0216 17:36:55.405053       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:36:55.405076       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:36:55.406841       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0216 17:36:55.406914       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0216 17:36:55.406952       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bf462f7a329c] <==
	I0216 17:37:05.180162       1 serving.go:380] Generated self-signed cert in-memory
	I0216 17:37:08.224878       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0216 17:37:08.226930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0216 17:37:08.242409       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0216 17:37:08.242502       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0216 17:37:08.242522       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0216 17:37:08.242548       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0216 17:37:08.244170       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0216 17:37:08.244200       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:37:08.244220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0216 17:37:08.244226       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0216 17:37:08.345477       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0216 17:37:08.345598       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0216 17:37:08.346063       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200197    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7eb329e720f0f9c7431c9cf98c17ad0236481449675d646838f16a377524c7b"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200209    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39735fe5749ed9e21680110c66e97ef91dff639e19f6176f8cb43d3fe5d2b2dc"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200228    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa796e1b61e3516f536e29abf3afff724a8b6244e69d293c4d63e33f82e1728"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200240    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daaf55b2f32c5c7185b083c74c17e7624864e97e3cdb020a9ad2539a9bc70d40"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200260    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d96e91587618500647429a8fa7e8521dd7cb2f84fa1a3b33117736daa6978ea9"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200275    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d860ad31cccf3ee079bb9bdd0782e5edc77e8ac78d082556a85dc726fb6e55"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.200287    4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d071cfeb1796ab2cd6e556f8a3190f3d7b37a9f653fb698a7b9cc1d6db7ec3"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.287280    4787 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bac9364f81c6d8c08991c96a3709d1bf-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-283660\" (UID: \"bac9364f81c6d8c08991c96a3709d1bf\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-283660"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: E0216 17:37:00.398385    4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-283660?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.459710    4787 scope.go:117] "RemoveContainer" containerID="4aa85bc506c51c35063309279a1cfc10efd6864b541aa5ed4985bb91cec8c4b8"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.496222    4787 scope.go:117] "RemoveContainer" containerID="5ba89862b861e4bee99ff13ab0b6944dae7495dd5eed9f0dd93f9d7ffacd4646"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.510736    4787 scope.go:117] "RemoveContainer" containerID="4647679498bf6a7f01f65106d9feab959de03231912ff68bcc263e9fa789799d"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.513032    4787 scope.go:117] "RemoveContainer" containerID="7eca4cb3dbdb19109d2ba26c8db975593b8f21e6ec4db4a329e6b0e1d76b71a8"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:00.613905    4787 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-283660"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: E0216 17:37:00.614251    4787 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-283660"
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: W0216 17:37:00.614332    4787 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-283660&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: E0216 17:37:00.614394    4787 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-283660&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: W0216 17:37:00.958528    4787 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:37:00 kubernetes-upgrade-283660 kubelet[4787]: E0216 17:37:00.958618    4787 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 16 17:37:01 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:01.423605    4787 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-283660"
	Feb 16 17:37:07 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:07.779344    4787 apiserver.go:52] "Watching apiserver"
	Feb 16 17:37:07 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:07.868320    4787 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-283660"
	Feb 16 17:37:07 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:07.868451    4787 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-283660"
	Feb 16 17:37:07 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:07.881748    4787 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 16 17:37:09 kubernetes-upgrade-283660 kubelet[4787]: I0216 17:37:09.902397    4787 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-283660" podStartSLOduration=16.902323741 podStartE2EDuration="16.902323741s" podCreationTimestamp="2024-02-16 17:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-16 17:37:09.901923483 +0000 UTC m=+10.321562662" watchObservedRunningTime="2024-02-16 17:37:09.902323741 +0000 UTC m=+10.321962912"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-283660 -n kubernetes-upgrade-283660
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-283660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-283660 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-283660 describe pod storage-provisioner: exit status 1 (86.101377ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-283660 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-283660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-283660
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-283660: (2.570163031s)
--- FAIL: TestKubernetesUpgrade (598.26s)
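To reproduce this failure outside CI, the single failing case can be re-run with the standard Go test runner. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the exact harness flags this CI job passes are not shown in the report and may differ:

	# from the minikube repository root: run only the upgrade test
	go test -v -timeout 90m -run 'TestKubernetesUpgrade' ./test/integration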

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (517s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0216 17:43:47.461672    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:57.702116    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:44:02.148794    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:44:15.158307    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:44:18.182956    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:44:43.109166    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:44:50.355172    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (8m36.557138684s)

                                                
                                                
-- stdout --
	* [old-k8s-version-488384] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node old-k8s-version-488384 in cluster old-k8s-version-488384
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 17:52:01 old-k8s-version-488384 kubelet[5456]: E0216 17:52:01.519546    5456 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:52:06 old-k8s-version-488384 kubelet[5456]: E0216 17:52:06.513912    5456 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:52:07 old-k8s-version-488384 kubelet[5456]: E0216 17:52:07.514080    5456 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 17:43:45.917338  305464 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:43:45.917600  305464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:43:45.917626  305464 out.go:304] Setting ErrFile to fd 2...
	I0216 17:43:45.917646  305464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:43:45.917936  305464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:43:45.918419  305464 out.go:298] Setting JSON to false
	I0216 17:43:45.919431  305464 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5176,"bootTime":1708100250,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 17:43:45.919522  305464 start.go:139] virtualization:  
	I0216 17:43:45.922579  305464 out.go:177] * [old-k8s-version-488384] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 17:43:45.924790  305464 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:43:45.926775  305464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:43:45.924840  305464 notify.go:220] Checking for updates...
	I0216 17:43:45.928743  305464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:43:45.930916  305464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 17:43:45.932780  305464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 17:43:45.934565  305464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:43:45.936945  305464 config.go:182] Loaded profile config "kubenet-850655": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:43:45.937049  305464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:43:45.962100  305464 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:43:45.962211  305464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:43:46.089994  305464 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:43:46.077191951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:43:46.090094  305464 docker.go:295] overlay module found
	I0216 17:43:46.092699  305464 out.go:177] * Using the docker driver based on user configuration
	I0216 17:43:46.094830  305464 start.go:299] selected driver: docker
	I0216 17:43:46.094844  305464 start.go:903] validating driver "docker" against <nil>
	I0216 17:43:46.094862  305464 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:43:46.095465  305464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:43:46.185702  305464 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:43:46.175132129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:43:46.185852  305464 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 17:43:46.186066  305464 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:43:46.188033  305464 out.go:177] * Using Docker driver with root privileges
	I0216 17:43:46.189546  305464 cni.go:84] Creating CNI manager for ""
	I0216 17:43:46.189568  305464 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:43:46.189579  305464 start_flags.go:323] config:
	{Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:43:46.193000  305464 out.go:177] * Starting control plane node old-k8s-version-488384 in cluster old-k8s-version-488384
	I0216 17:43:46.194846  305464 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:43:46.196751  305464 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:43:46.198455  305464 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:43:46.198499  305464 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0216 17:43:46.198513  305464 cache.go:56] Caching tarball of preloaded images
	I0216 17:43:46.198587  305464 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 17:43:46.198597  305464 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:43:46.198699  305464 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/config.json ...
	I0216 17:43:46.198717  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/config.json: {Name:mk67edff72f078cc3d0b50c1c7c7c02b0c181363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:43:46.198857  305464 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:43:46.228038  305464 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:43:46.228058  305464 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:43:46.228075  305464 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:43:46.228102  305464 start.go:365] acquiring machines lock for old-k8s-version-488384: {Name:mk24b3849756b4e9198a885ebb33ae78e58e4fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:43:46.228212  305464 start.go:369] acquired machines lock for "old-k8s-version-488384" in 84.382µs
	I0216 17:43:46.228238  305464 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 17:43:46.228311  305464 start.go:125] createHost starting for "" (driver="docker")
	I0216 17:43:46.230423  305464 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0216 17:43:46.230665  305464 start.go:159] libmachine.API.Create for "old-k8s-version-488384" (driver="docker")
	I0216 17:43:46.230697  305464 client.go:168] LocalClient.Create starting
	I0216 17:43:46.230783  305464 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem
	I0216 17:43:46.230814  305464 main.go:141] libmachine: Decoding PEM data...
	I0216 17:43:46.230828  305464 main.go:141] libmachine: Parsing certificate...
	I0216 17:43:46.230875  305464 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem
	I0216 17:43:46.230891  305464 main.go:141] libmachine: Decoding PEM data...
	I0216 17:43:46.230905  305464 main.go:141] libmachine: Parsing certificate...
	I0216 17:43:46.231264  305464 cli_runner.go:164] Run: docker network inspect old-k8s-version-488384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0216 17:43:46.256101  305464 cli_runner.go:211] docker network inspect old-k8s-version-488384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0216 17:43:46.256175  305464 network_create.go:281] running [docker network inspect old-k8s-version-488384] to gather additional debugging logs...
	I0216 17:43:46.256201  305464 cli_runner.go:164] Run: docker network inspect old-k8s-version-488384
	W0216 17:43:46.275288  305464 cli_runner.go:211] docker network inspect old-k8s-version-488384 returned with exit code 1
	I0216 17:43:46.275313  305464 network_create.go:284] error running [docker network inspect old-k8s-version-488384]: docker network inspect old-k8s-version-488384: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-488384 not found
	I0216 17:43:46.275326  305464 network_create.go:286] output of [docker network inspect old-k8s-version-488384]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-488384 not found
	
	** /stderr **
	I0216 17:43:46.275420  305464 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:43:46.299165  305464 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bf2219ceb1d4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fc:0a:69:d6} reservation:<nil>}
	I0216 17:43:46.299565  305464 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88cc490de1c4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6d:6b:26:04} reservation:<nil>}
	I0216 17:43:46.300022  305464 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024fbce0}
	I0216 17:43:46.300041  305464 network_create.go:124] attempt to create docker network old-k8s-version-488384 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0216 17:43:46.300093  305464 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-488384 old-k8s-version-488384
	I0216 17:43:46.381645  305464 network_create.go:108] docker network old-k8s-version-488384 192.168.67.0/24 created
	I0216 17:43:46.381672  305464 kic.go:121] calculated static IP "192.168.67.2" for the "old-k8s-version-488384" container
	I0216 17:43:46.381753  305464 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0216 17:43:46.402695  305464 cli_runner.go:164] Run: docker volume create old-k8s-version-488384 --label name.minikube.sigs.k8s.io=old-k8s-version-488384 --label created_by.minikube.sigs.k8s.io=true
	I0216 17:43:46.419911  305464 oci.go:103] Successfully created a docker volume old-k8s-version-488384
	I0216 17:43:46.420002  305464 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-488384-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-488384 --entrypoint /usr/bin/test -v old-k8s-version-488384:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
	I0216 17:43:47.103904  305464 oci.go:107] Successfully prepared a docker volume old-k8s-version-488384
	I0216 17:43:47.103958  305464 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:43:47.103979  305464 kic.go:194] Starting extracting preloaded images to volume ...
	I0216 17:43:47.104057  305464 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-488384:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
	I0216 17:43:51.651289  305464 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-488384:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.547193135s)
	I0216 17:43:51.651316  305464 kic.go:203] duration metric: took 4.547335 seconds to extract preloaded images to volume
	W0216 17:43:51.651452  305464 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0216 17:43:51.651554  305464 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0216 17:43:51.763585  305464 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-488384 --name old-k8s-version-488384 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-488384 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-488384 --network old-k8s-version-488384 --ip 192.168.67.2 --volume old-k8s-version-488384:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
	I0216 17:43:52.330643  305464 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Running}}
	I0216 17:43:52.357379  305464 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Status}}
	I0216 17:43:52.390564  305464 cli_runner.go:164] Run: docker exec old-k8s-version-488384 stat /var/lib/dpkg/alternatives/iptables
	I0216 17:43:52.459439  305464 oci.go:144] the created container "old-k8s-version-488384" has a running status.
	I0216 17:43:52.459468  305464 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa...
	I0216 17:43:53.045839  305464 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0216 17:43:53.071578  305464 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Status}}
	I0216 17:43:53.094255  305464 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0216 17:43:53.094277  305464 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-488384 chown docker:docker /home/docker/.ssh/authorized_keys]
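
	(The kic runner above creates an RSA identity under .minikube/machines/old-k8s-version-488384/ and copies the public half into /home/docker/.ssh/authorized_keys. A hedged sketch of generating such a key pair in Go; it assumes the golang.org/x/crypto/ssh dependency and writes to the current directory rather than the minikube machine store.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a 2048-bit RSA key as a kic-style ssh identity.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// PEM-encode the private key (what lands in .../machines/<name>/id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}

	// Marshal the public half in authorized_keys format (the ~381-byte line
	// the log copies to /home/docker/.ssh/authorized_keys).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
```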
	I0216 17:43:53.186113  305464 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Status}}
	I0216 17:43:53.228820  305464 machine.go:88] provisioning docker machine ...
	I0216 17:43:53.228862  305464 ubuntu.go:169] provisioning hostname "old-k8s-version-488384"
	I0216 17:43:53.228930  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:53.252315  305464 main.go:141] libmachine: Using SSH client type: native
	I0216 17:43:53.252892  305464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0216 17:43:53.252909  305464 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-488384 && echo "old-k8s-version-488384" | sudo tee /etc/hostname
	I0216 17:43:53.253474  305464 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53186->127.0.0.1:33057: read: connection reset by peer
	I0216 17:43:56.441420  305464 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-488384
	
	I0216 17:43:56.441537  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:56.469959  305464 main.go:141] libmachine: Using SSH client type: native
	I0216 17:43:56.470435  305464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0216 17:43:56.470463  305464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-488384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-488384/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-488384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:43:56.633059  305464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:43:56.633097  305464 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 17:43:56.633154  305464 ubuntu.go:177] setting up certificates
	I0216 17:43:56.633164  305464 provision.go:83] configureAuth start
	I0216 17:43:56.633314  305464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:43:56.659937  305464 provision.go:138] copyHostCerts
	I0216 17:43:56.660008  305464 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 17:43:56.660022  305464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 17:43:56.660108  305464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 17:43:56.660210  305464 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 17:43:56.660224  305464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 17:43:56.660254  305464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 17:43:56.660311  305464 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 17:43:56.660321  305464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 17:43:56.660346  305464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 17:43:56.660412  305464 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-488384 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-488384]
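
	(The provision step generates a server certificate whose SANs cover the node IP, loopback, "localhost", "minikube", and the machine name. A minimal crypto/x509 sketch carrying the same SAN list; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-488384"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log's san=[...] list, split into IPs and DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-488384"},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}
```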
	I0216 17:43:56.901566  305464 provision.go:172] copyRemoteCerts
	I0216 17:43:56.901648  305464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:43:56.901701  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:56.921863  305464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:43:57.027177  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:43:57.058206  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 17:43:57.086375  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:43:57.114924  305464 provision.go:86] duration metric: configureAuth took 481.739759ms
	I0216 17:43:57.114996  305464 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:43:57.115231  305464 config.go:182] Loaded profile config "old-k8s-version-488384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:43:57.115311  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:57.134286  305464 main.go:141] libmachine: Using SSH client type: native
	I0216 17:43:57.134722  305464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0216 17:43:57.134735  305464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:43:57.282315  305464 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:43:57.282335  305464 ubuntu.go:71] root file system type: overlay
	I0216 17:43:57.282451  305464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:43:57.282518  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:57.303772  305464 main.go:141] libmachine: Using SSH client type: native
	I0216 17:43:57.304206  305464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0216 17:43:57.304285  305464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:43:57.474048  305464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:43:57.474203  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:57.495804  305464 main.go:141] libmachine: Using SSH client type: native
	I0216 17:43:57.496220  305464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33057 <nil> <nil>}
	I0216 17:43:57.496238  305464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:43:58.414273  305464 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-16 17:43:57.464880626 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0216 17:43:58.414304  305464 machine.go:91] provisioned docker machine in 5.18546255s
	I0216 17:43:58.414317  305464 client.go:171] LocalClient.Create took 12.183614086s
	I0216 17:43:58.414331  305464 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-488384" took 12.18366463s
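
	(The docker.service update that just ran is idempotent: the SSH command only moves docker.service.new into place and restarts docker when diff reports a change. The same compare-then-swap pattern, sketched in Go with illustrative paths; not minikube's actual implementation.)

```go
package main

import (
	"bytes"
	"os"
	"os/exec"
)

// replaceIfChanged installs newPath over path only when contents differ,
// running reload afterwards -- mirroring the shell idiom
// `diff ... || { mv ...; daemon-reload; restart; }` from the log.
func replaceIfChanged(path, newPath string, reload func() error) error {
	old, _ := os.ReadFile(path) // a missing file reads as empty -> "changed"
	fresh, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(old, fresh) {
		// Contents identical; discard the staged copy (the shell variant
		// simply leaves it in place).
		return os.Remove(newPath)
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	return reload()
}

func main() {
	err := replaceIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		func() error {
			if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
				return err
			}
			return exec.Command("systemctl", "restart", "docker").Run()
		},
	)
	if err != nil {
		panic(err)
	}
}
```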
	I0216 17:43:58.414357  305464 start.go:300] post-start starting for "old-k8s-version-488384" (driver="docker")
	I0216 17:43:58.414373  305464 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:43:58.414440  305464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:43:58.414486  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:58.431251  305464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:43:58.534236  305464 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:43:58.537406  305464 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:43:58.537448  305464 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:43:58.537480  305464 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:43:58.537494  305464 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:43:58.537505  305464 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 17:43:58.537571  305464 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 17:43:58.537656  305464 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 17:43:58.537765  305464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:43:58.546207  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:43:58.571169  305464 start.go:303] post-start completed in 156.797543ms
	I0216 17:43:58.571534  305464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:43:58.602067  305464 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/config.json ...
	I0216 17:43:58.602353  305464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:43:58.602418  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:58.623992  305464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:43:58.722718  305464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:43:58.727943  305464 start.go:128] duration metric: createHost completed in 12.499618522s
	I0216 17:43:58.727969  305464 start.go:83] releasing machines lock for "old-k8s-version-488384", held for 12.499748574s
	I0216 17:43:58.728038  305464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:43:58.755448  305464 ssh_runner.go:195] Run: cat /version.json
	I0216 17:43:58.755670  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:58.755620  305464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:43:58.755968  305464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:43:58.790423  305464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:43:58.801541  305464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33057 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:43:58.892105  305464 ssh_runner.go:195] Run: systemctl --version
	I0216 17:43:59.027395  305464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 17:43:59.031600  305464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 17:43:59.056897  305464 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 17:43:59.056976  305464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:43:59.074130  305464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:43:59.093821  305464 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0216 17:43:59.093884  305464 start.go:475] detecting cgroup driver to use...
	I0216 17:43:59.093930  305464 detect.go:196] detected "cgroupfs" cgroup driver on host os
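
	(detect.go reports "cgroupfs" for this host. One common heuristic -- not necessarily the one minikube uses -- is to test for the unified cgroup v2 hierarchy, which exposes cgroup.controllers at the mount root, and fall back to v1/cgroupfs otherwise.)

```go
package main

import (
	"fmt"
	"os"
)

// cgroupDriverGuess is a simplified heuristic: a unified cgroup v2 mount
// exposes cgroup.controllers at the hierarchy root; v2 hosts typically pair
// with the systemd driver, v1 hosts with cgroupfs.
func cgroupDriverGuess() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", cgroupDriverGuess())
}
```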
	I0216 17:43:59.094054  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:43:59.112256  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:43:59.123216  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:43:59.133463  305464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:43:59.133529  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:43:59.146448  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:43:59.157027  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:43:59.167090  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:43:59.177619  305464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:43:59.194612  305464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:43:59.205062  305464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:43:59.214886  305464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:43:59.223572  305464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:43:59.308385  305464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0216 17:43:59.405630  305464 start.go:475] detecting cgroup driver to use...
	I0216 17:43:59.405675  305464 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:43:59.405725  305464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:43:59.420521  305464 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:43:59.420597  305464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:43:59.434085  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:43:59.456016  305464 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:43:59.460151  305464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:43:59.468733  305464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:43:59.486894  305464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:43:59.605206  305464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:43:59.706569  305464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:43:59.706797  305464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:43:59.732730  305464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:43:59.865734  305464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:44:00.353193  305464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:44:00.395744  305464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:44:00.455023  305464 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:44:00.455143  305464 cli_runner.go:164] Run: docker network inspect old-k8s-version-488384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:44:00.480695  305464 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0216 17:44:00.487221  305464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
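
	(The bash one-liner above converges /etc/hosts to exactly one host.minikube.internal mapping: strip any stale line, then re-append the current one, so repeated runs are safe. The same idea in Go, with the IP and name taken from the log; writing the file back would need root, so this sketch only prints the result.)

```go
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites /etc/hosts-style content so that exactly one
// line maps name, matching the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	os.Stdout.WriteString(ensureHostsEntry(string(data), "192.168.67.1", "host.minikube.internal"))
}
```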
	I0216 17:44:00.502829  305464 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:44:00.502916  305464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:44:00.541578  305464 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:44:00.541601  305464 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:44:00.541657  305464 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:44:00.556984  305464 ssh_runner.go:195] Run: which lz4
	I0216 17:44:00.563250  305464 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:44:00.568895  305464 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:44:00.568929  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (394173841 bytes)
	I0216 17:44:02.782914  305464 docker.go:649] Took 2.219703 seconds to copy over tarball
	I0216 17:44:02.783004  305464 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:44:05.379588  305464 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.596553504s)
	I0216 17:44:05.379658  305464 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0216 17:44:05.449612  305464 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:44:05.458377  305464 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:44:05.476927  305464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:44:05.570190  305464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:44:06.850970  305464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.280739616s)
	I0216 17:44:06.851073  305464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:44:06.878741  305464 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:44:06.878758  305464 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:44:06.878767  305464 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:44:06.880710  305464 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:44:06.880806  305464 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:44:06.880924  305464 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:44:06.880950  305464 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:44:06.881003  305464 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:44:06.881092  305464 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:44:06.881137  305464 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:44:06.881897  305464 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:44:06.882487  305464 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:44:06.882936  305464 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:44:06.883097  305464 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0216 17:44:06.883213  305464 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:44:06.883337  305464 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:44:06.883642  305464 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:44:06.883907  305464 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:44:06.885129  305464 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	W0216 17:44:07.213858  305464 image.go:265] image registry.k8s.io/pause:3.1 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.214009  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0216 17:44:07.230578  305464 image.go:265] image registry.k8s.io/kube-scheduler:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.230794  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:44:07.233459  305464 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5" in container runtime
	I0216 17:44:07.233547  305464 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:44:07.233607  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	W0216 17:44:07.241230  305464 image.go:265] image registry.k8s.io/kube-apiserver:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.241420  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	W0216 17:44:07.260438  305464 image.go:265] image registry.k8s.io/coredns:1.6.2 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.260664  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0216 17:44:07.263689  305464 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "e82518cbd8204462b7b3756330f327ee6de72bbb84aaebc4c8cadf77c821a661" in container runtime
	I0216 17:44:07.263751  305464 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:44:07.263809  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	W0216 17:44:07.266406  305464 image.go:265] image registry.k8s.io/kube-controller-manager:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.266622  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	W0216 17:44:07.267541  305464 image.go:265] image registry.k8s.io/etcd:3.3.15-0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.267791  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	W0216 17:44:07.275716  305464 image.go:265] image registry.k8s.io/kube-proxy:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.275951  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:44:07.405266  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/pause_3.1
	I0216 17:44:07.405312  305464 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "06c3d6081b24d1d3f9c703ae2e40666f3237db9490060a03c4b29894a78205ef" in container runtime
	I0216 17:44:07.405336  305464 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:44:07.405377  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:44:07.407371  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:44:07.407412  305464 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "5f93833eff730f6c51ed0232bb218db5ab7bbb05ed0d460c4678d8b433670640" in container runtime
	I0216 17:44:07.407444  305464 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:44:07.407477  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:44:07.407521  305464 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "3f4e1b5a89fe11634ed042397d01167d866dfa3225cfed8279f54ec7f8f58486" in container runtime
	I0216 17:44:07.407534  305464 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:44:07.407553  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:44:07.407857  305464 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "553e5791dff2eebc7969b9df892ad18a487fcfa425e098ed3059173e36d98f72" in container runtime
	I0216 17:44:07.407878  305464 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:44:07.407908  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:44:07.410985  305464 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "051b2962d7b329402cf101d688a2de7bc400efea9dd4de77745af5d77489a847" in container runtime
	I0216 17:44:07.411043  305464 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:44:07.411108  305464 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:44:07.435743  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:44:07.468527  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:44:07.468627  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:44:07.469573  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.2
	I0216 17:44:07.469595  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.16.0
	W0216 17:44:07.507667  305464 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0216 17:44:07.507815  305464 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:44:07.525954  305464 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0216 17:44:07.526050  305464 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:44:07.526125  305464 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:44:07.555791  305464 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0216 17:44:07.555958  305464 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:44:07.559524  305464 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0216 17:44:07.559561  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0216 17:44:07.648083  305464 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:44:07.648148  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0216 17:44:07.968680  305464 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0216 17:44:07.968745  305464 cache_images.go:92] LoadImages completed in 1.089939898s
	W0216 17:44:07.968812  305464 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/pause_3.1: no such file or directory
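
	(The "arch mismatch: want arm64 got amd64. fixing" warnings earlier come from comparing each cached image's architecture against the node's. A hedged sketch of that check, shelling out to docker the way the log does; {{.Architecture}} is a standard `docker image inspect` field.)

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

// imageArch asks the local docker daemon for an image's architecture,
// mirroring the `docker image inspect` calls in the log.
func imageArch(image string) (string, error) {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	image := "registry.k8s.io/pause:3.1"
	arch, err := imageArch(image)
	if err != nil {
		fmt.Println("daemon lookup failed:", err)
		return
	}
	if arch != runtime.GOARCH {
		fmt.Printf("image %s arch mismatch: want %s got %s. fixing\n",
			image, runtime.GOARCH, arch)
	}
}
```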
	I0216 17:44:07.968875  305464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:44:08.026505  305464 cni.go:84] Creating CNI manager for ""
	I0216 17:44:08.026531  305464 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:44:08.026547  305464 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:44:08.026566  305464 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-488384 NodeName:old-k8s-version-488384 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:44:08.026707  305464 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-488384"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-488384
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
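
	(The config just printed is rendered by substituting the kubeadm options -- node name, advertise address, CRI socket, and so on -- into a version-appropriate template. A toy text/template rendering of only the nodeRegistration fragment, with values taken from the log; the real template covers all three YAML documents above.)

```go
package main

import (
	"os"
	"text/template"
)

// A toy fragment of the v1beta1 kubeadm template shown in the log.
const nodeRegistration = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(nodeRegistration))
	err := tmpl.Execute(os.Stdout, map[string]string{
		"CRISocket": "/var/run/dockershim.sock",
		"NodeName":  "old-k8s-version-488384",
		"NodeIP":    "192.168.67.2",
	})
	if err != nil {
		panic(err)
	}
}
```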
	
	I0216 17:44:08.026774  305464 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-488384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:44:08.026844  305464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:44:08.036111  305464 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:44:08.036186  305464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:44:08.045523  305464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:44:08.064272  305464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:44:08.083503  305464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 17:44:08.102845  305464 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:44:08.106515  305464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:44:08.117361  305464 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384 for IP: 192.168.67.2
	I0216 17:44:08.117431  305464 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:08.117564  305464 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 17:44:08.117613  305464 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 17:44:08.117666  305464 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.key
	I0216 17:44:08.117681  305464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.crt with IP's: []
	I0216 17:44:08.490558  305464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.crt ...
	I0216 17:44:08.490590  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.crt: {Name:mk6be765281296a271bf945592d12193d13e3a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:08.490782  305464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.key ...
	I0216 17:44:08.490795  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.key: {Name:mkb9946397b6b825a55f47de800205d5cf9aacff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:08.490881  305464 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key.c7fa3a9e
	I0216 17:44:08.490904  305464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0216 17:44:08.899550  305464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt.c7fa3a9e ...
	I0216 17:44:08.899578  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt.c7fa3a9e: {Name:mkfcbc3980467b24265bec39c546f72a3c038e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:08.900118  305464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key.c7fa3a9e ...
	I0216 17:44:08.900136  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key.c7fa3a9e: {Name:mk2cb1c7d0f1444f376d1baa587b07633a0736ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:08.900226  305464 certs.go:337] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt
	I0216 17:44:08.900306  305464 certs.go:341] copying /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key
	I0216 17:44:08.900363  305464 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key
	I0216 17:44:08.900381  305464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.crt with IP's: []
	I0216 17:44:09.182378  305464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.crt ...
	I0216 17:44:09.182408  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.crt: {Name:mk1637ac314524cbe2b7f2a1e52f306bab041e7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:09.182585  305464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key ...
	I0216 17:44:09.182600  305464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key: {Name:mkec021588dc3dd0f104d522dedf7c0eb10e8606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:44:09.183230  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 17:44:09.183274  305464 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 17:44:09.183287  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 17:44:09.183314  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 17:44:09.183347  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:44:09.183378  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 17:44:09.183429  305464 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:44:09.184056  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:44:09.209721  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 17:44:09.235124  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:44:09.263947  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 17:44:09.292559  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:44:09.319693  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 17:44:09.350278  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:44:09.379195  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 17:44:09.404694  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:44:09.430101  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 17:44:09.460339  305464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 17:44:09.485639  305464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:44:09.504311  305464 ssh_runner.go:195] Run: openssl version
	I0216 17:44:09.510095  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 17:44:09.519574  305464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 17:44:09.523190  305464 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 17:44:09.523279  305464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 17:44:09.530358  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:44:09.539765  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:44:09.548940  305464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:44:09.552401  305464 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:44:09.552466  305464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:44:09.559783  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:44:09.569557  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 17:44:09.579459  305464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 17:44:09.583005  305464 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 17:44:09.583076  305464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 17:44:09.590599  305464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
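The repeated sequence above (install a PEM under /usr/share/ca-certificates, hash it with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at it) is the standard OpenSSL hashed-directory convention that lets libssl find certificates by subject hash. A sketch of the same steps driven from Go, assuming an openssl binary on PATH and root privileges for /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the shell sequence in the log: ask openssl for
    // the cert's subject hash, then link /etc/ssl/certs/<hash>.0 at the PEM.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Equivalent of ln -fs: drop any stale link before creating the new one.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }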
	I0216 17:44:09.600265  305464 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:44:09.604032  305464 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
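Treating `ls /var/lib/minikube/certs/etcd` exiting with status 2 as "certs directory doesn't exist, likely first start" is simply an existence probe run over SSH; done locally, the same decision is a single Stat call, e.g.:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// exit status 2 from `ls` over SSH corresponds to fs.ErrNotExist here.
    	_, err := os.Stat("/var/lib/minikube/certs/etcd")
    	switch {
    	case err == nil:
    		fmt.Println("etcd certs present: not a first start")
    	case errors.Is(err, fs.ErrNotExist):
    		fmt.Println("certs directory doesn't exist, likely first start")
    	default:
    		fmt.Println("stat failed:", err)
    	}
    }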
	I0216 17:44:09.604109  305464 kubeadm.go:404] StartCluster: {Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:44:09.604236  305464 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:44:09.621173  305464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:44:09.630943  305464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:44:09.640105  305464 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:44:09.640173  305464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:44:09.649747  305464 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
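The config check above only stats the four kubeconfig files a previous kubeadm run would have left behind; any missing file means there is nothing stale to clean up, hence "skipping stale config cleanup". A minimal sketch of that gate (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    )

    // needsStaleConfigCleanup reports whether all of kubeadm's kubeconfig
    // files are present; a missing file means a fresh node, matching the
    // "skipping stale config cleanup" path in the log.
    func needsStaleConfigCleanup() bool {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if _, err := os.Stat(f); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	fmt.Println("cleanup needed:", needsStaleConfigCleanup())
    }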
	I0216 17:44:09.649793  305464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:44:09.878886  305464 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:44:09.938656  305464 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:44:09.938963  305464 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 17:44:10.028999  305464 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:48:18.580985  305464 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:48:18.581096  305464 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:48:18.584855  305464 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:48:18.584922  305464 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:48:18.585157  305464 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:48:18.585241  305464 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:48:18.585302  305464 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:48:18.585339  305464 kubeadm.go:322] OS: Linux
	I0216 17:48:18.585399  305464 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:48:18.585457  305464 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:48:18.585514  305464 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:48:18.585579  305464 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:48:18.585644  305464 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:48:18.585722  305464 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:48:18.585821  305464 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:48:18.585926  305464 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:48:18.586029  305464 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:48:18.586152  305464 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:48:18.586246  305464 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:48:18.586308  305464 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:48:18.586378  305464 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:48:18.588552  305464 out.go:204]   - Generating certificates and keys ...
	I0216 17:48:18.588664  305464 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:48:18.588733  305464 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:48:18.588801  305464 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0216 17:48:18.588858  305464 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0216 17:48:18.588918  305464 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0216 17:48:18.588970  305464 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0216 17:48:18.589024  305464 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0216 17:48:18.589145  305464 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-488384 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 17:48:18.589197  305464 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0216 17:48:18.589314  305464 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-488384 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0216 17:48:18.589377  305464 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0216 17:48:18.589447  305464 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0216 17:48:18.589494  305464 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0216 17:48:18.589553  305464 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:48:18.589604  305464 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:48:18.589657  305464 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:48:18.589720  305464 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:48:18.589773  305464 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:48:18.589838  305464 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:48:18.591974  305464 out.go:204]   - Booting up control plane ...
	I0216 17:48:18.592112  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:48:18.592221  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:48:18.592295  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:48:18.592379  305464 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:48:18.592529  305464 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:48:18.592578  305464 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:48:18.592588  305464 kubeadm.go:322] 
	I0216 17:48:18.592625  305464 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:48:18.592753  305464 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:48:18.592766  305464 kubeadm.go:322] 
	I0216 17:48:18.592825  305464 kubeadm.go:322] This error is likely caused by:
	I0216 17:48:18.592871  305464 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:48:18.592981  305464 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:48:18.592992  305464 kubeadm.go:322] 
	I0216 17:48:18.593089  305464 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:48:18.593130  305464 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:48:18.593170  305464 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:48:18.593181  305464 kubeadm.go:322] 
	I0216 17:48:18.593277  305464 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:48:18.593377  305464 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:48:18.593457  305464 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:48:18.593506  305464 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:48:18.593587  305464 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	W0216 17:48:18.593753  305464 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-488384 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-488384 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
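kubeadm's troubleshooting advice boils down to: enumerate the kube containers (skipping the pause sandboxes) and read logs from whichever one died. The same recipe driven from Go rather than a shell pipeline, as a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// docker ps -a | grep kube | grep -v pause, without the shell:
    	// list all containers and filter the names in-process.
    	out, err := exec.Command("docker", "ps", "-a", "--format", "{{.ID}} {{.Names}}").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "docker ps failed:", err)
    		return
    	}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if !strings.Contains(line, "kube") || strings.Contains(line, "pause") {
    			continue
    		}
    		id := strings.Fields(line)[0]
    		// Equivalent of 'docker logs CONTAINERID' for each candidate.
    		logs, _ := exec.Command("docker", "logs", "--tail", "20", id).CombinedOutput()
    		fmt.Printf("--- %s ---\n%s\n", line, logs)
    	}
    }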
	I0216 17:48:18.593809  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:48:18.594062  305464 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:48:19.406942  305464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:48:19.418708  305464 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:48:19.418775  305464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:48:19.427933  305464 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 17:48:19.427977  305464 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:48:19.486298  305464 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:48:19.486637  305464 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:48:19.684847  305464 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:48:19.684927  305464 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:48:19.684979  305464 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:48:19.685017  305464 kubeadm.go:322] OS: Linux
	I0216 17:48:19.685063  305464 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:48:19.685112  305464 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:48:19.685159  305464 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:48:19.685204  305464 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:48:19.685251  305464 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:48:19.685294  305464 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:48:19.779568  305464 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:48:19.779849  305464 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:48:19.779964  305464 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:48:19.993242  305464 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:48:19.995165  305464 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:48:20.005297  305464 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:48:20.099817  305464 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:48:20.102229  305464 out.go:204]   - Generating certificates and keys ...
	I0216 17:48:20.102567  305464 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:48:20.103406  305464 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:48:20.105503  305464 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:48:20.106142  305464 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:48:20.106381  305464 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:48:20.106697  305464 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:48:20.107285  305464 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:48:20.107871  305464 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:48:20.108490  305464 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:48:20.109086  305464 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:48:20.109282  305464 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:48:20.109362  305464 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:48:20.370456  305464 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:48:20.660810  305464 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:48:21.120291  305464 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:48:21.586099  305464 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:48:21.587413  305464 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:48:21.589875  305464 out.go:204]   - Booting up control plane ...
	I0216 17:48:21.589973  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:48:21.600186  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:48:21.601663  305464 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:48:21.607908  305464 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:48:21.627176  305464 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:49:01.628166  305464 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 17:52:21.628720  305464 kubeadm.go:322] 
	I0216 17:52:21.628851  305464 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 17:52:21.628938  305464 kubeadm.go:322] 	timed out waiting for the condition
	I0216 17:52:21.628973  305464 kubeadm.go:322] 
	I0216 17:52:21.629038  305464 kubeadm.go:322] This error is likely caused by:
	I0216 17:52:21.629088  305464 kubeadm.go:322] 	- The kubelet is not running
	I0216 17:52:21.629225  305464 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 17:52:21.629251  305464 kubeadm.go:322] 
	I0216 17:52:21.629386  305464 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 17:52:21.629448  305464 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 17:52:21.629508  305464 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 17:52:21.629533  305464 kubeadm.go:322] 
	I0216 17:52:21.629667  305464 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 17:52:21.629783  305464 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 17:52:21.629931  305464 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 17:52:21.630032  305464 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 17:52:21.630116  305464 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 17:52:21.630149  305464 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 17:52:21.634731  305464 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 17:52:21.634878  305464 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 17:52:21.635092  305464 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 17:52:21.635197  305464 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 17:52:21.635282  305464 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 17:52:21.635348  305464 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 17:52:21.635404  305464 kubeadm.go:406] StartCluster complete in 8m12.031297885s
	I0216 17:52:21.635484  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:52:21.657053  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.657115  305464 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:52:21.657219  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:52:21.708381  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.708445  305464 logs.go:278] No container was found matching "etcd"
	I0216 17:52:21.708519  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:52:21.750765  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.750832  305464 logs.go:278] No container was found matching "coredns"
	I0216 17:52:21.750906  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:52:21.771360  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.771416  305464 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:52:21.771487  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:52:21.810962  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.811035  305464 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:52:21.811108  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:52:21.835509  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.835571  305464 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:52:21.835649  305464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:52:21.870681  305464 logs.go:276] 0 containers: []
	W0216 17:52:21.870747  305464 logs.go:278] No container was found matching "kindnet"
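Each probe in the block above is the same `docker ps -a` call with a different `name=k8s_<component>` filter, and an empty ID list is what triggers the "No container was found matching" warnings. A compact sketch of that loop (illustrative, not minikube's actual logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, c := range components {
    		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		}
    	}
    }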
	I0216 17:52:21.870771  305464 logs.go:123] Gathering logs for kubelet ...
	I0216 17:52:21.870797  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:52:21.906074  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:01 old-k8s-version-488384 kubelet[5456]: E0216 17:52:01.519546    5456 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:52:21.918767  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:06 old-k8s-version-488384 kubelet[5456]: E0216 17:52:06.513912    5456 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:52:21.921872  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:07 old-k8s-version-488384 kubelet[5456]: E0216 17:52:07.514080    5456 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:52:21.927091  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:09 old-k8s-version-488384 kubelet[5456]: E0216 17:52:09.513366    5456 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:52:21.940063  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:14 old-k8s-version-488384 kubelet[5456]: E0216 17:52:14.519080    5456 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:52:21.947385  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:17 old-k8s-version-488384 kubelet[5456]: E0216 17:52:17.519955    5456 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:52:21.957673  305464 logs.go:138] Found kubelet problem: Feb 16 17:52:21 old-k8s-version-488384 kubelet[5456]: E0216 17:52:21.520708    5456 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
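The "Found kubelet problem" entries are produced by scanning the last 400 kubelet journal lines for known error signatures; here every hit is the same dockershim ImageInspectError. A sketch of such a scan, with an illustrative signature list (journalctl access typically requires root or the adm group):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// sudo journalctl -u kubelet -n 400, then flag lines matching
    	// known problem signatures (this list is an assumption).
    	signatures := []string{"Error syncing pod", "ImageInspectError"}
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	sc := bufio.NewScanner(strings.NewReader(string(out)))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, sig := range signatures {
    			if strings.Contains(line, sig) {
    				fmt.Println("Found kubelet problem:", line)
    				break
    			}
    		}
    	}
    }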
	I0216 17:52:21.959080  305464 logs.go:123] Gathering logs for dmesg ...
	I0216 17:52:21.959128  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:52:21.989349  305464 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:52:21.989490  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:52:22.139986  305464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:52:22.140045  305464 logs.go:123] Gathering logs for Docker ...
	I0216 17:52:22.140065  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:52:22.162817  305464 logs.go:123] Gathering logs for container status ...
	I0216 17:52:22.162848  305464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
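The container-status probe prefers crictl and falls back to plain docker, which is what the ``which crictl || echo crictl`` / `||` chain encodes. The same fallback expressed in Go, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Prefer crictl if it exists on PATH and succeeds; otherwise fall
    	// back to docker, mirroring the shell one-liner in the log.
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
    			fmt.Print(string(out))
    			return
    		}
    	}
    	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    	}
    	fmt.Print(string(out))
    }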
	W0216 17:52:22.221718  305464 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 17:52:22.221799  305464 out.go:239] * 
	W0216 17:52:22.221879  305464 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:52:22.222182  305464 out.go:239] * 
	W0216 17:52:22.223218  305464 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 17:52:22.249058  305464 out.go:177] X Problems detected in kubelet:
	I0216 17:52:22.268938  305464 out.go:177]   Feb 16 17:52:01 old-k8s-version-488384 kubelet[5456]: E0216 17:52:01.519546    5456 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:52:22.282915  305464 out.go:177]   Feb 16 17:52:06 old-k8s-version-488384 kubelet[5456]: E0216 17:52:06.513912    5456 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:52:22.298636  305464 out.go:177]   Feb 16 17:52:07 old-k8s-version-488384 kubelet[5456]: E0216 17:52:07.514080    5456 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:52:22.313884  305464 out.go:177] 
	W0216 17:52:22.321031  305464 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0216 17:52:22.321104  305464 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 17:52:22.321124  305464 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 17:52:22.354669  305464 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
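The kubelet "ImageInspectError" entries above ("Id or size of image ... is not set") suggest the v1.16.0 dockershim cannot parse the image metadata returned by Docker 25.0.3, consistent with the preflight warning that 18.09 is the latest validated Docker version. A minimal triage sketch, built from the commands kubeadm itself recommends above, lightly extended (minikube below stands for the out/minikube-linux-arm64 binary under test):

	# open a shell on the node for this profile
	minikube -p old-k8s-version-488384 ssh
	# inside the node: check kubelet health and its recent log
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	# list control-plane containers, then read the logs of a failing one
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID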
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d",
	        "Created": "2024-02-16T17:43:51.781636674Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:43:52.321549666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hosts",
	        "LogPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d-json.log",
	        "Name": "/old-k8s-version-488384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-488384:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-488384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-488384",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-488384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-488384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e99aebf69c0f66ee8799a92fbb9d9b35c73420d9fe016ee29fa9d199d23cdde",
	            "SandboxKey": "/var/run/docker/netns/1e99aebf69c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-488384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ad7a05058fe",
	                        "old-k8s-version-488384"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "066a8ce33ebb9a8bf9130346706c7668acc42f9f2a9352243a5b99995ed10eb4",
	                    "EndpointID": "7f089d26c6ab52b2462e3bb9dd4eddece50cdc6dbfc4abbff695c3e2d07b6874",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-488384",
	                        "2ad7a05058fe"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 6 (352.007409ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:52:22.821493  335872 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-488384" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488384" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (517.00s)
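Both preflight warnings in the run above point at drift between the node's Docker daemon and what kubeadm v1.16.0 expects: a "cgroupfs" cgroup driver where "systemd" is recommended, and Docker 25.0.3 against a last-validated 18.09. A sketch for confirming the driver and retrying with the override minikube itself suggests (minikube again stands for the binary under test):

	# inspect the cgroup driver of the Docker daemon inside the node
	minikube -p old-k8s-version-488384 ssh "docker info --format '{{.CgroupDriver}}'"
	# retry the failed start with the suggested kubelet override
	minikube start -p old-k8s-version-488384 --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

The other half would be aligning the daemon itself, e.g. {"exec-opts": ["native.cgroupdriver=systemd"]} in the node's /etc/docker/daemon.json; whether either step unblocks v1.16.0 against Docker 25.x is unverified, since the ImageInspectError lines point at a version gap that a driver change alone may not close.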

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-488384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-488384 create -f testdata/busybox.yaml: exit status 1 (56.834019ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-488384" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-488384 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:

                                                
                                                
-- stdout --
	[docker inspect output for old-k8s-version-488384 identical to the dump shown in the FirstStart post-mortem above]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 6 (333.387744ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:52:23.231002  335937 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-488384" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488384" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:

                                                
                                                
-- stdout --
	[docker inspect output for old-k8s-version-488384 identical to the dump shown in the FirstStart post-mortem above]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 6 (341.419075ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:52:23.590883  335998 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-488384" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488384" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.77s)
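This failure is downstream of FirstStart: the status.go:415 errors above show that no "old-k8s-version-488384" entry was ever written to /home/jenkins/minikube-integration/17936-2208/kubeconfig, so every kubectl --context call necessarily fails. A quick check to confirm a missing context before suspecting the deploy step itself, a sketch outside the test harness:

	# list the contexts known to the kubeconfig the tests use
	KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig \
	  kubectl config get-contexts -o name
	# once the cluster actually starts, regenerate the entry
	minikube -p old-k8s-version-488384 update-context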

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-488384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0216 17:52:39.371336    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:52:40.171034    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-488384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.831293639s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-488384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-488384 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-488384 describe deploy/metrics-server -n kube-system: exit status 1 (60.07827ms)

** stderr ** 
	error: context "old-k8s-version-488384" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-488384 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
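Every callback in the failure above dies with "connection refused" against 127.0.0.1:8443, i.e. the apiserver inside the node was not up when the addon manifests were applied; the later `context "old-k8s-version-488384" does not exist` error is downstream of the same broken cluster. A quick manual probe, assuming the same profile and that curl is present in the node image (both assumptions; this was not run as part of the test):

	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh "curl -sk https://localhost:8443/healthz"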
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:

-- stdout --
	[
	    {
	        "Id": "2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d",
	        "Created": "2024-02-16T17:43:51.781636674Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:43:52.321549666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hosts",
	        "LogPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d-json.log",
	        "Name": "/old-k8s-version-488384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-488384:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-488384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-488384",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-488384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-488384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e99aebf69c0f66ee8799a92fbb9d9b35c73420d9fe016ee29fa9d199d23cdde",
	            "SandboxKey": "/var/run/docker/netns/1e99aebf69c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-488384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ad7a05058fe",
	                        "old-k8s-version-488384"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "066a8ce33ebb9a8bf9130346706c7668acc42f9f2a9352243a5b99995ed10eb4",
	                    "EndpointID": "7f089d26c6ab52b2462e3bb9dd4eddece50cdc6dbfc4abbff695c3e2d07b6874",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-488384",
	                        "2ad7a05058fe"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
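As an aside, the post-mortem's full `docker inspect` dump can be narrowed to just the fields that matter here with Go templates; the second template below is the same one the minikube log uses later in this report to resolve the SSH port (shown only as a hedged convenience, not part of the test run):

	docker inspect -f '{{.State.Status}}' old-k8s-version-488384
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-488384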
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 6 (311.311038ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0216 17:54:07.814052  345178 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-488384" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-488384" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (767.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0216 17:54:50.355681    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:54:55.529498    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:55:11.567646    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:55:23.211500    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:55:23.334606    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 17:55:56.303194    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:56:13.404379    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:56:21.542740    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.548086    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.558337    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.578671    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.618982    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.699199    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:21.859631    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:22.180182    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:22.820866    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:24.101537    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:26.662494    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:31.316579    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:56:31.783518    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:42.024021    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:56:42.639637    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:57:02.504234    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:57:03.077607    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:57:40.171162    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:57:43.464621    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:57:54.360220    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:58:09.423869    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:58:21.178075    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:58:37.210852    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: exit status 109 (12m45.998071892s)

-- stdout --
	* [old-k8s-version-488384] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-488384 in cluster old-k8s-version-488384
	* Pulling base image v0.0.42-1708008208-17936 ...
	* Restarting existing docker container for "old-k8s-version-488384" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 16 18:06:33 old-k8s-version-488384 kubelet[10015]: E0216 18:06:33.601902   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 18:06:34 old-k8s-version-488384 kubelet[10015]: E0216 18:06:34.598572   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 18:06:35 old-k8s-version-488384 kubelet[10015]: E0216 18:06:35.599305   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	
	

-- /stdout --
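The kubelet problems above are all ImageInspectError ("Id or size of image ... is not set") for the v1.16.0 control-plane images, which suggests the kubelet's dockershim could not get usable inspect data for those images from the Docker 25.0.3 daemon shown earlier in this output. A hedged way to check by hand whether the images are inspectable inside the node, assuming the same profile (illustrative only; this command was not run here):

	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh "docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 --format '{{.Id}} {{.Size}}'"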
** stderr ** 
	I0216 17:54:09.348377  345500 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:54:09.348626  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:54:09.348680  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:54:09.348701  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:54:09.348991  345500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:54:09.349462  345500 out.go:298] Setting JSON to false
	I0216 17:54:09.350646  345500 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5799,"bootTime":1708100250,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 17:54:09.350750  345500 start.go:139] virtualization:  
	I0216 17:54:09.353099  345500 out.go:177] * [old-k8s-version-488384] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 17:54:09.354946  345500 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 17:54:09.356848  345500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 17:54:09.355044  345500 notify.go:220] Checking for updates...
	I0216 17:54:09.361214  345500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:54:09.363200  345500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 17:54:09.365696  345500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 17:54:09.367632  345500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 17:54:09.370019  345500 config.go:182] Loaded profile config "old-k8s-version-488384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:54:09.372472  345500 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0216 17:54:09.374433  345500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 17:54:09.403079  345500 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 17:54:09.403201  345500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:54:09.469216  345500 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:54:09.459095871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:54:09.469320  345500 docker.go:295] overlay module found
	I0216 17:54:09.473033  345500 out.go:177] * Using the docker driver based on existing profile
	I0216 17:54:09.474877  345500 start.go:299] selected driver: docker
	I0216 17:54:09.474896  345500 start.go:903] validating driver "docker" against &{Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:54:09.475060  345500 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 17:54:09.475890  345500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:54:09.542106  345500 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 17:54:09.531445607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:54:09.542473  345500 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 17:54:09.542542  345500 cni.go:84] Creating CNI manager for ""
	I0216 17:54:09.542558  345500 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:54:09.542569  345500 start_flags.go:323] config:
	{Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:54:09.545932  345500 out.go:177] * Starting control plane node old-k8s-version-488384 in cluster old-k8s-version-488384
	I0216 17:54:09.547625  345500 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 17:54:09.549543  345500 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 17:54:09.551266  345500 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:54:09.551292  345500 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 17:54:09.551310  345500 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0216 17:54:09.551320  345500 cache.go:56] Caching tarball of preloaded images
	I0216 17:54:09.551395  345500 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 17:54:09.551405  345500 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0216 17:54:09.551528  345500 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/config.json ...
	I0216 17:54:09.570937  345500 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 17:54:09.570962  345500 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 17:54:09.570986  345500 cache.go:194] Successfully downloaded all kic artifacts
	I0216 17:54:09.571022  345500 start.go:365] acquiring machines lock for old-k8s-version-488384: {Name:mk24b3849756b4e9198a885ebb33ae78e58e4fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 17:54:09.571140  345500 start.go:369] acquired machines lock for "old-k8s-version-488384" in 56.936µs
	I0216 17:54:09.571168  345500 start.go:96] Skipping create...Using existing machine configuration
	I0216 17:54:09.571178  345500 fix.go:54] fixHost starting: 
	I0216 17:54:09.571445  345500 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Status}}
	I0216 17:54:09.587381  345500 fix.go:102] recreateIfNeeded on old-k8s-version-488384: state=Stopped err=<nil>
	W0216 17:54:09.587409  345500 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 17:54:09.589640  345500 out.go:177] * Restarting existing docker container for "old-k8s-version-488384" ...
	I0216 17:54:09.591468  345500 cli_runner.go:164] Run: docker start old-k8s-version-488384
	I0216 17:54:09.902261  345500 cli_runner.go:164] Run: docker container inspect old-k8s-version-488384 --format={{.State.Status}}
	I0216 17:54:09.927392  345500 kic.go:430] container "old-k8s-version-488384" state is running.
	I0216 17:54:09.928001  345500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:54:09.956817  345500 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/config.json ...
	I0216 17:54:09.957051  345500 machine.go:88] provisioning docker machine ...
	I0216 17:54:09.957068  345500 ubuntu.go:169] provisioning hostname "old-k8s-version-488384"
	I0216 17:54:09.957126  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:09.976619  345500 main.go:141] libmachine: Using SSH client type: native
	I0216 17:54:09.977073  345500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0216 17:54:09.977246  345500 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-488384 && echo "old-k8s-version-488384" | sudo tee /etc/hostname
	I0216 17:54:09.978409  345500 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36866->127.0.0.1:33082: read: connection reset by peer
	I0216 17:54:13.133819  345500 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-488384
	
	I0216 17:54:13.133960  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:13.153624  345500 main.go:141] libmachine: Using SSH client type: native
	I0216 17:54:13.154030  345500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0216 17:54:13.154048  345500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-488384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-488384/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-488384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 17:54:13.296680  345500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:54:13.296714  345500 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 17:54:13.296734  345500 ubuntu.go:177] setting up certificates
	I0216 17:54:13.296749  345500 provision.go:83] configureAuth start
	I0216 17:54:13.296813  345500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:54:13.312962  345500 provision.go:138] copyHostCerts
	I0216 17:54:13.313041  345500 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 17:54:13.313053  345500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 17:54:13.313130  345500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 17:54:13.313233  345500 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 17:54:13.313244  345500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 17:54:13.313272  345500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 17:54:13.313325  345500 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 17:54:13.313334  345500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 17:54:13.313358  345500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 17:54:13.313404  345500 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-488384 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-488384]
	I0216 17:54:13.989716  345500 provision.go:172] copyRemoteCerts
	I0216 17:54:13.989795  345500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 17:54:13.989838  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.006668  345500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:54:14.109936  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 17:54:14.136127  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0216 17:54:14.164511  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 17:54:14.190063  345500 provision.go:86] duration metric: configureAuth took 893.297429ms
	I0216 17:54:14.190127  345500 ubuntu.go:193] setting minikube options for container-runtime
	I0216 17:54:14.190353  345500 config.go:182] Loaded profile config "old-k8s-version-488384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0216 17:54:14.190435  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.207853  345500 main.go:141] libmachine: Using SSH client type: native
	I0216 17:54:14.208245  345500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0216 17:54:14.208254  345500 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 17:54:14.354059  345500 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 17:54:14.354082  345500 ubuntu.go:71] root file system type: overlay
	I0216 17:54:14.354198  345500 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 17:54:14.354269  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.372100  345500 main.go:141] libmachine: Using SSH client type: native
	I0216 17:54:14.372554  345500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0216 17:54:14.372716  345500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 17:54:14.529830  345500 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 17:54:14.529945  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.546562  345500 main.go:141] libmachine: Using SSH client type: native
	I0216 17:54:14.546973  345500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0216 17:54:14.546992  345500 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 17:54:14.700371  345500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 17:54:14.700393  345500 machine.go:91] provisioned docker machine in 4.743332375s
	I0216 17:54:14.700403  345500 start.go:300] post-start starting for "old-k8s-version-488384" (driver="docker")
	I0216 17:54:14.700416  345500 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 17:54:14.700533  345500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 17:54:14.700574  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.717930  345500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:54:14.821850  345500 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 17:54:14.825296  345500 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 17:54:14.825383  345500 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 17:54:14.825415  345500 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 17:54:14.825430  345500 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 17:54:14.825446  345500 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 17:54:14.825540  345500 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 17:54:14.825630  345500 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 17:54:14.825740  345500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 17:54:14.834635  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:54:14.860147  345500 start.go:303] post-start completed in 159.729191ms
	I0216 17:54:14.860310  345500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:54:14.860410  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.876937  345500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:54:14.973637  345500 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 17:54:14.978336  345500 fix.go:56] fixHost completed within 5.407152359s
	I0216 17:54:14.978362  345500 start.go:83] releasing machines lock for "old-k8s-version-488384", held for 5.40720754s
	I0216 17:54:14.978434  345500 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-488384
	I0216 17:54:14.994990  345500 ssh_runner.go:195] Run: cat /version.json
	I0216 17:54:14.995023  345500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 17:54:14.995054  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:14.995081  345500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-488384
	I0216 17:54:15.035698  345500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:54:15.043207  345500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/old-k8s-version-488384/id_rsa Username:docker}
	I0216 17:54:15.136902  345500 ssh_runner.go:195] Run: systemctl --version
	I0216 17:54:15.282630  345500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0216 17:54:15.287153  345500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0216 17:54:15.287283  345500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0216 17:54:15.296770  345500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0216 17:54:15.306225  345500 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0216 17:54:15.306259  345500 start.go:475] detecting cgroup driver to use...
	I0216 17:54:15.306291  345500 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:54:15.306393  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:54:15.324008  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0216 17:54:15.334580  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 17:54:15.345343  345500 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 17:54:15.345415  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 17:54:15.355342  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:54:15.365634  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 17:54:15.376222  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 17:54:15.386210  345500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 17:54:15.395877  345500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 17:54:15.406682  345500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 17:54:15.415431  345500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 17:54:15.424155  345500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:54:15.519105  345500 ssh_runner.go:195] Run: sudo systemctl restart containerd
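The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver; the key change is pinning SystemdCgroup = false before the restart. The same in-place rewrite as a small Go sketch (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs is the equivalent of the sed command above: every
	// `SystemdCgroup = ...` line is set to false, preserving its indent.
	func forceCgroupfs(configToml string) string {
		re := regexp.MustCompile(`(?m)^([\t ]*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
	}

	func main() {
		fmt.Print(forceCgroupfs("  SystemdCgroup = true\n"))
	}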
	I0216 17:54:15.606237  345500 start.go:475] detecting cgroup driver to use...
	I0216 17:54:15.606281  345500 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 17:54:15.606354  345500 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 17:54:15.622967  345500 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 17:54:15.623114  345500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 17:54:15.641777  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 17:54:15.671900  345500 ssh_runner.go:195] Run: which cri-dockerd
	I0216 17:54:15.675731  345500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 17:54:15.686390  345500 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 17:54:15.710690  345500 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 17:54:15.819934  345500 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 17:54:15.921727  345500 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 17:54:15.921913  345500 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 17:54:15.942872  345500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:54:16.035726  345500 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:54:16.324055  345500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:54:16.347042  345500 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 17:54:16.373505  345500 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0216 17:54:16.373644  345500 cli_runner.go:164] Run: docker network inspect old-k8s-version-488384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 17:54:16.389091  345500 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0216 17:54:16.392865  345500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
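The bash pipeline above is an idempotent /etc/hosts update: strip any existing line ending in the tab-separated host name, then append the fresh mapping, so repeated runs never duplicate the entry. The same logic as a pure Go function (a sketch, not minikube's implementation):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHost drops any stale line for the name and appends the current
	// "<ip>\t<name>" mapping, mirroring the grep -v / echo / cp pipeline.
	func ensureHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(ensureHost("127.0.0.1\tlocalhost\n", "192.168.67.1", "host.minikube.internal"))
	}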
	I0216 17:54:16.403868  345500 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 17:54:16.403933  345500 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:54:16.422099  345500 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:54:16.422119  345500 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:54:16.422183  345500 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:54:16.431306  345500 ssh_runner.go:195] Run: which lz4
	I0216 17:54:16.435101  345500 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0216 17:54:16.438800  345500 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0216 17:54:16.438840  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (394173841 bytes)
	I0216 17:54:18.317188  345500 docker.go:649] Took 1.882128 seconds to copy over tarball
	I0216 17:54:18.317335  345500 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0216 17:54:20.817594  345500 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500213676s)
	I0216 17:54:20.817621  345500 ssh_runner.go:146] rm: /preloaded.tar.lz4
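The preload sequence above follows a fixed shape: stat the tarball on the machine (exit 1 means it is absent), copy the ~394 MB archive over, extract it into /var with lz4 while preserving xattrs, then delete it. In outline, with runSSH and scpFile as hypothetical stand-ins for minikube's ssh_runner helpers:

	package main

	// runSSH and scpFile are hypothetical stand-ins; the shell commands are
	// the ones visible in the log above.
	func runSSH(cmd string) error       { return nil }
	func scpFile(src, dst string) error { return nil }

	func loadPreload(localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		// Existence check: `stat` exits non-zero when the tarball is absent.
		if err := runSSH(`stat -c "%s %y" ` + remote); err != nil {
			if err := scpFile(localTarball, remote); err != nil {
				return err
			}
		}
		// Extract preserving security.capability xattrs, then clean up.
		if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
			return err
		}
		return runSSH("rm " + remote)
	}

	func main() { _ = loadPreload("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4") }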
	I0216 17:54:20.892461  345500 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0216 17:54:20.901526  345500 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0216 17:54:20.919425  345500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 17:54:21.009736  345500 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 17:54:22.181295  345500 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.171527159s)
	I0216 17:54:22.181408  345500 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 17:54:22.200740  345500 docker.go:685] Got preloaded images: -- stdout --
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0216 17:54:22.200761  345500 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0216 17:54:22.200772  345500 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0216 17:54:22.202621  345500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:54:22.202803  345500 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0216 17:54:22.202939  345500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:54:22.203095  345500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:54:22.203186  345500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:54:22.203249  345500 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:54:22.203474  345500 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0216 17:54:22.203545  345500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:54:22.203672  345500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:54:22.205157  345500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:54:22.205495  345500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:54:22.205682  345500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:54:22.205822  345500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:54:22.205953  345500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:54:22.206152  345500 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0216 17:54:22.206548  345500 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	W0216 17:54:22.541439  345500 image.go:265] image registry.k8s.io/pause:3.1 arch mismatch: want arm64 got amd64. fixing
	W0216 17:54:22.544401  345500 image.go:265] image registry.k8s.io/kube-apiserver:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.551121  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	W0216 17:54:22.546438  345500 image.go:265] image registry.k8s.io/kube-scheduler:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.548392  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0216 17:54:22.552616  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	W0216 17:54:22.561060  345500 image.go:265] image registry.k8s.io/coredns:1.6.2 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.561276  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	W0216 17:54:22.573943  345500 image.go:265] image registry.k8s.io/kube-proxy:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.574116  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	W0216 17:54:22.574820  345500 image.go:265] image registry.k8s.io/kube-controller-manager:v1.16.0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.574956  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	W0216 17:54:22.591652  345500 image.go:265] image registry.k8s.io/etcd:3.3.15-0 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.591859  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0216 17:54:22.618030  345500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "06c3d6081b24d1d3f9c703ae2e40666f3237db9490060a03c4b29894a78205ef" in container runtime
	I0216 17:54:22.618077  345500 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:54:22.618129  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0216 17:54:22.618239  345500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "e82518cbd8204462b7b3756330f327ee6de72bbb84aaebc4c8cadf77c821a661" in container runtime
	I0216 17:54:22.618263  345500 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:54:22.618290  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0216 17:54:22.618368  345500 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "051b2962d7b329402cf101d688a2de7bc400efea9dd4de77745af5d77489a847" in container runtime
	I0216 17:54:22.618391  345500 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0216 17:54:22.618423  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0216 17:54:22.618493  345500 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5" in container runtime
	I0216 17:54:22.618513  345500 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0216 17:54:22.618541  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0216 17:54:22.669636  345500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "553e5791dff2eebc7969b9df892ad18a487fcfa425e098ed3059173e36d98f72" in container runtime
	I0216 17:54:22.669681  345500 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:54:22.669744  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0216 17:54:22.703708  345500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "5f93833eff730f6c51ed0232bb218db5ab7bbb05ed0d460c4678d8b433670640" in container runtime
	I0216 17:54:22.703750  345500 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0216 17:54:22.703802  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	W0216 17:54:22.707107  345500 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0216 17:54:22.707273  345500 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:54:22.709948  345500 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "3f4e1b5a89fe11634ed042397d01167d866dfa3225cfed8279f54ec7f8f58486" in container runtime
	I0216 17:54:22.709986  345500 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0216 17:54:22.710036  345500 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0216 17:54:22.710187  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.16.0
	I0216 17:54:22.710240  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/pause_3.1
	I0216 17:54:22.710381  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.16.0
	I0216 17:54:22.734019  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.16.0
	I0216 17:54:22.734306  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.2
	I0216 17:54:22.763151  345500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0216 17:54:22.763193  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.3.15-0
	I0216 17:54:22.763196  345500 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:54:22.763271  345500 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 17:54:22.763521  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0216 17:54:22.786391  345500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0216 17:54:22.786512  345500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:54:22.790107  345500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0216 17:54:22.790130  345500 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0216 17:54:22.790143  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0216 17:54:22.847409  345500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0216 17:54:22.847504  345500 cache_images.go:92] LoadImages completed in 646.720783ms
	W0216 17:54:22.847570  345500 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-2208/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
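What drives the "needs transfer" lines above: for each required image, cache_images inspects the image ID in the container runtime and compares it against the expected hash; on a mismatch (here, amd64 layers where arm64 is wanted) the image is removed and reloaded from the on-disk cache. A compact sketch of that decision (function names are illustrative, not minikube's):

	package main

	import "fmt"

	// needsTransfer reports whether an image must be reloaded from the cache:
	// true when it is missing from the runtime or its ID differs from the
	// expected hash (e.g. an amd64 image where arm64 is required).
	func needsTransfer(image, wantID string, inspectID func(string) (string, error)) bool {
		gotID, err := inspectID(image) // `docker image inspect --format {{.Id}}`
		return err != nil || gotID != wantID
	}

	func main() {
		fake := func(string) (string, error) { return "sha256:amd64-layer", nil }
		fmt.Println(needsTransfer("registry.k8s.io/pause:3.1", "sha256:arm64-layer", fake))
	}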
	I0216 17:54:22.847638  345500 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 17:54:22.908356  345500 cni.go:84] Creating CNI manager for ""
	I0216 17:54:22.908384  345500 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 17:54:22.908401  345500 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 17:54:22.908420  345500 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-488384 NodeName:old-k8s-version-488384 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0216 17:54:22.908561  345500 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-488384"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-488384
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
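	The config documents above are rendered by kubeadm.go from the options struct logged at kubeadm.go:176 and shipped to /var/tmp/minikube/kubeadm.yaml.new further down. A minimal text/template sketch of that rendering, using a simplified struct and template rather than minikube's actual ones:

	package main

	import (
		"os"
		"text/template"
	)

	// An illustrative template for the InitConfiguration document above;
	// built from concatenated strings so the YAML keeps space indentation.
	const initCfg = "apiVersion: kubeadm.k8s.io/v1beta1\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"nodeRegistration:\n" +
		"  criSocket: {{.CRISocket}}\n" +
		"  name: \"{{.NodeName}}\"\n"

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		_ = t.Execute(os.Stdout, struct {
			AdvertiseAddress, CRISocket, NodeName string
			APIServerPort                         int
		}{"192.168.67.2", "/var/run/dockershim.sock", "old-k8s-version-488384", 8443})
	}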
	
	I0216 17:54:22.908668  345500 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-488384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 17:54:22.908743  345500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0216 17:54:22.918448  345500 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 17:54:22.918538  345500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 17:54:22.927821  345500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0216 17:54:22.947930  345500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 17:54:22.966494  345500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0216 17:54:22.985497  345500 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0216 17:54:22.989078  345500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 17:54:23.000400  345500 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384 for IP: 192.168.67.2
	I0216 17:54:23.000434  345500 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:54:23.000617  345500 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 17:54:23.000789  345500 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 17:54:23.000898  345500 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/client.key
	I0216 17:54:23.000964  345500 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key.c7fa3a9e
	I0216 17:54:23.001039  345500 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key
	I0216 17:54:23.001156  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 17:54:23.001200  345500 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 17:54:23.001210  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 17:54:23.001236  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 17:54:23.001267  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 17:54:23.001297  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 17:54:23.001348  345500 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 17:54:23.002063  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 17:54:23.027517  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 17:54:23.053213  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 17:54:23.079579  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/old-k8s-version-488384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 17:54:23.106225  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 17:54:23.133454  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 17:54:23.164232  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 17:54:23.193775  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 17:54:23.223397  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 17:54:23.251545  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 17:54:23.277583  345500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 17:54:23.302693  345500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 17:54:23.320258  345500 ssh_runner.go:195] Run: openssl version
	I0216 17:54:23.325679  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 17:54:23.335214  345500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 17:54:23.338727  345500 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 17:54:23.338835  345500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 17:54:23.345777  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 17:54:23.354957  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 17:54:23.364237  345500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:54:23.367870  345500 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:54:23.367971  345500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 17:54:23.375161  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 17:54:23.384183  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 17:54:23.393887  345500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 17:54:23.397562  345500 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 17:54:23.397668  345500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 17:54:23.404690  345500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
	I0216 17:54:23.413840  345500 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 17:54:23.417400  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 17:54:23.424301  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 17:54:23.431435  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 17:54:23.438441  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 17:54:23.445517  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 17:54:23.452520  345500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
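The six openssl runs above are 24-hour expiry checks: `-checkend 86400` exits non-zero if the certificate lapses within a day, which is what would trigger regeneration. The same check via Go's crypto/x509 (a sketch under that reading of the flag):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon mirrors `openssl x509 -noout -in <crt> -checkend <secs>`:
	// it reports whether the certificate's NotAfter falls inside the window.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}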
	I0216 17:54:23.459528  345500 kubeadm.go:404] StartCluster: {Name:old-k8s-version-488384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-488384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 17:54:23.459682  345500 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:54:23.477325  345500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 17:54:23.486506  345500 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 17:54:23.486528  345500 kubeadm.go:636] restartCluster start
	I0216 17:54:23.486588  345500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 17:54:23.495303  345500 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:23.495895  345500 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-488384" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 17:54:23.496108  345500 kubeconfig.go:146] "old-k8s-version-488384" context is missing from /home/jenkins/minikube-integration/17936-2208/kubeconfig - will repair!
	I0216 17:54:23.496532  345500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 17:54:23.498042  345500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 17:54:23.506876  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:23.506991  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:23.518017  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:24.007721  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:24.007883  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:24.019371  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:24.507834  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:24.507971  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:24.518091  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:25.007474  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:25.007576  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:25.018823  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:25.507395  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:25.507479  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:25.518215  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:26.007757  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:26.007903  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:26.019243  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:26.507904  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:26.507992  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:26.517935  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:27.007606  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:27.007711  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:27.018579  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:27.507174  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:27.507263  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:27.517424  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:28.007016  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:28.007119  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:28.017938  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:28.507643  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:28.507743  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:28.517846  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:29.007415  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:29.007498  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:29.017470  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:29.507302  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:29.507404  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:29.517719  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:30.007224  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:30.007338  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:30.022630  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:30.507264  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:30.507365  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:30.517673  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:31.006987  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:31.007072  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:31.017193  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:31.507709  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:31.507794  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:31.517964  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:32.007720  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:32.007805  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:32.018010  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:32.507814  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:32.507925  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:32.518565  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:33.007137  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:33.007228  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:33.018680  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:33.507424  345500 api_server.go:166] Checking apiserver status ...
	I0216 17:54:33.507510  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 17:54:33.517733  345500 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 17:54:33.517760  345500 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
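The block above is a fixed-interval wait: the pgrep probe is re-run roughly every 500ms until it succeeds or the surrounding context times out, and the timeout is exactly the "context deadline exceeded" verdict that sends the run down the reconfigure path. In miniature (illustrative, not minikube's api_server.go):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitForAPIServer re-runs check every 500ms until it succeeds or the
	// context deadline expires, matching the cadence of the log lines above.
	func waitForAPIServer(ctx context.Context, check func() error) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := check(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded"
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		err := waitForAPIServer(ctx, func() error { return fmt.Errorf("no pid") })
		fmt.Println(err)
	}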
	I0216 17:54:33.517770  345500 kubeadm.go:1135] stopping kube-system containers ...
	I0216 17:54:33.517834  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 17:54:33.539687  345500 docker.go:483] Stopping containers: [fbdc64cd1516 7bf1bf7ff8d4 a39756ae7898 174b5aa68c90]
	I0216 17:54:33.539762  345500 ssh_runner.go:195] Run: docker stop fbdc64cd1516 7bf1bf7ff8d4 a39756ae7898 174b5aa68c90
	I0216 17:54:33.558206  345500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 17:54:33.571091  345500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:54:33.580191  345500 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 16 17:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 16 17:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 16 17:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Feb 16 17:48 /etc/kubernetes/scheduler.conf
	
	I0216 17:54:33.580261  345500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 17:54:33.589524  345500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 17:54:33.598442  345500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 17:54:33.607544  345500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 17:54:33.616500  345500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:54:33.625537  345500 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 17:54:33.625564  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:54:33.702771  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:54:35.294719  345500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.591853523s)
	I0216 17:54:35.294751  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:54:35.534712  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:54:35.657541  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0216 17:54:35.789955  345500 api_server.go:52] waiting for apiserver process to appear ...
	I0216 17:54:35.790033  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:36.290122  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:36.790159  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:37.290154  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:37.790133  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:38.290748  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:38.790745  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:39.290370  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:39.790826  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:40.290496  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:40.790909  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:41.291031  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:41.791083  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:42.290189  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:42.790265  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:43.291026  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:43.791136  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:44.290354  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:44.790176  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:45.290814  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:45.790378  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:46.290684  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:46.790945  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:47.290364  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:47.790195  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:48.291054  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:48.790203  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:49.290520  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:54:49.791087  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 90 further identical attempts of "Run: sudo pgrep -xnf kube-apiserver.*minikube.*", one roughly every 500ms from 17:54:50.290 through 17:55:34.790, condensed here ...]
	I0216 17:55:35.291095  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
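
The run above is a liveness wait: the same pgrep probe fires roughly every 500ms until it matches a kube-apiserver process or the deadline lapses (here it never matches). A minimal Go sketch of that retry pattern, using a hypothetical waitForAPIServer helper rather than minikube's actual ssh_runner/verify code:

// Poll sketch: retry a liveness probe on a fixed interval until it
// succeeds or the deadline passes. runProbe stands in for the SSH
// command shown in the log above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// runProbe returns nil when a process matching the pattern exists;
// pgrep exits non-zero when nothing matches.
func runProbe() error {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runProbe(); err == nil {
			return nil // process found
		}
		time.Sleep(interval) // ~500ms between attempts in the log
	}
	return errors.New("kube-apiserver never appeared before the deadline")
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, 45*time.Second); err != nil {
		fmt.Println("wait failed:", err)
	}
}
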
	I0216 17:55:35.790181  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:55:35.809512  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.809546  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:55:35.809611  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:55:35.828258  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.828282  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:55:35.828345  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:55:35.845980  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.846003  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:55:35.846066  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:55:35.863184  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.863207  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:55:35.863270  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:55:35.881018  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.881040  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:55:35.881103  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:55:35.899241  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.899264  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:55:35.899325  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:55:35.917024  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.917090  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:55:35.917166  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:55:35.938705  345500 logs.go:276] 0 containers: []
	W0216 17:55:35.938727  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
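
With the apiserver still absent, the collector enumerates containers for each control-plane component by Docker name filter; every lookup here returns an empty list. A sketch of that per-component query (illustrative only; minikube's real version lives in logs.go, whose source is not shown here):

// For each component, list container IDs whose name matches
// k8s_<component>, mirroring the `docker ps -a --filter` lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("lookup %s: %v\n", c, err)
			continue
		}
		// An empty slice reproduces the `0 containers: []` lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
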
	I0216 17:55:35.938738  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:55:35.938752  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:55:35.969630  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:14 old-k8s-version-488384 kubelet[1517]: E0216 17:55:14.836405    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:35.972418  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:15 old-k8s-version-488384 kubelet[1517]: E0216 17:55:15.836792    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:35.976952  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:16 old-k8s-version-488384 kubelet[1517]: E0216 17:55:16.837136    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:55:35.979742  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:17 old-k8s-version-488384 kubelet[1517]: E0216 17:55:17.847783    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:36.009722  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.841384    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:36.010424  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.844980    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:36.013098  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:36.018545  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
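
Every kubelet problem above is the same failure: StartContainer aborts with an ImageInspectError because Docker's inspect result for these k8s.gcr.io v1.16.0-era images reports neither an Id nor a Size. A small Go check one could run on the node to see what inspect actually returns for an affected image (a manual reproduction sketch, not part of the test harness):

// Print the Id and Size that `docker image inspect` reports for one
// of the images named in the kubelet errors above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}} {{.Size}}",
		"k8s.gcr.io/kube-scheduler:v1.16.0").CombinedOutput()
	if err != nil {
		// Image missing entirely, or an inspect payload docker rejects.
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	// A healthy image prints a sha256 Id and a non-zero byte size; the
	// kubelet error fires when either field comes back unset.
	fmt.Printf("Id and Size: %s", out)
}
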
	I0216 17:55:36.029361  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:55:36.029401  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:55:36.049974  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:55:36.050008  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:55:36.138307  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
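
The describe-nodes failure is consistent with the probes above: no apiserver container exists, so nothing listens on localhost:8443 and kubectl's connection is refused. A quick TCP reachability check that would confirm this before invoking kubectl (illustrative only):

// Dial the apiserver address directly; "connection refused" here
// corresponds to kubectl's error above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
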
	I0216 17:55:36.138330  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:55:36.138348  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:55:36.161377  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:55:36.161408  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
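
The container-status command relies on a shell fallback: `which crictl || echo crictl` substitutes the bare word crictl when the binary is missing, the resulting `ps -a` then fails, and the outer `||` hands control to `sudo docker ps -a`. The same preference order written directly in Go (a sketch, not the collector's code):

// Prefer crictl when it is on PATH; otherwise (or on failure) fall
// back to docker, mirroring the shell one-liner in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
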
	I0216 17:55:36.213593  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:36.213656  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:55:36.213722  345500 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0216 17:55:36.213762  345500 out.go:239]   Feb 16 17:55:17 old-k8s-version-488384 kubelet[1517]: E0216 17:55:17.847783    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:55:17 old-k8s-version-488384 kubelet[1517]: E0216 17:55:17.847783    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:36.213805  345500 out.go:239]   Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.841384    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.841384    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:36.213842  345500 out.go:239]   Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.844980    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.844980    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:36.213879  345500 out.go:239]   Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:36.213924  345500 out.go:239]   Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:55:36.213957  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:36.213989  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
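
Note that every problem line in the summary above appears twice, once with a klog-style W prefix and once bare. That pattern would follow from the collector emitting each warning both through its logger and straight to stderr; a minimal dual-sink sketch under that assumption (minikube's actual out.go is not reproduced here):

// Write each warning twice: once via the structured logger and once
// raw, producing paired lines like those in the summary. Assumption-
// labeled sketch only.
package main

import (
	"fmt"
	"log"
	"os"
)

func warn(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	log.Printf("W %s", msg)           // prefixed copy, like the W0216 lines
	fmt.Fprintln(os.Stderr, " ", msg) // bare copy, like the indented repeats
}

func main() {
	log.SetOutput(os.Stderr)
	warn("X Problems detected in kubelet:")
}
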
	I0216 17:55:46.216068  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:55:46.226639  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:55:46.243420  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.243441  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:55:46.243501  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:55:46.266731  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.266756  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:55:46.266817  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:55:46.285069  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.285091  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:55:46.285150  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:55:46.302177  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.302207  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:55:46.302271  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:55:46.319334  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.319363  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:55:46.319424  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:55:46.340984  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.341006  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:55:46.341066  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:55:46.358243  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.358265  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:55:46.358332  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:55:46.374794  345500 logs.go:276] 0 containers: []
	W0216 17:55:46.374819  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:55:46.374829  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:55:46.374842  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:55:46.393216  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:55:46.393246  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:55:46.435573  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:55:46.435602  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:55:46.467929  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.841384    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:46.468726  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:28 old-k8s-version-488384 kubelet[1517]: E0216 17:55:28.844980    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:46.471448  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:46.476411  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:55:46.496031  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:40 old-k8s-version-488384 kubelet[1517]: E0216 17:55:40.836500    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:46.502890  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:46.505886  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:55:46.509236  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:55:46.509254  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:55:46.528045  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:55:46.528074  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:55:46.608265  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:55:46.608289  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:46.608302  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:55:46.608352  345500 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0216 17:55:46.608365  345500 out.go:239]   Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:29 old-k8s-version-488384 kubelet[1517]: E0216 17:55:29.836413    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:46.608372  345500 out.go:239]   Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:55:31 old-k8s-version-488384 kubelet[1517]: E0216 17:55:31.837044    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:55:46.608380  345500 out.go:239]   Feb 16 17:55:40 old-k8s-version-488384 kubelet[1517]: E0216 17:55:40.836500    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:55:40 old-k8s-version-488384 kubelet[1517]: E0216 17:55:40.836500    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:46.608388  345500 out.go:239]   Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:46.608394  345500 out.go:239]   Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:55:46.608405  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:46.608413  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:55:56.609587  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:55:56.622541  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:55:56.642318  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.642342  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:55:56.642405  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:55:56.667013  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.667036  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:55:56.667097  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:55:56.686018  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.686041  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:55:56.686104  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:55:56.704974  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.704994  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:55:56.705054  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:55:56.721958  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.721979  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:55:56.722038  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:55:56.740032  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.740053  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:55:56.740114  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:55:56.757794  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.757816  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:55:56.757878  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:55:56.777858  345500 logs.go:276] 0 containers: []
	W0216 17:55:56.777879  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:55:56.777890  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:55:56.777902  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:55:56.813970  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:40 old-k8s-version-488384 kubelet[1517]: E0216 17:55:40.836500    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:56.820780  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:56.823766  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:56.828623  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:46 old-k8s-version-488384 kubelet[1517]: E0216 17:55:46.837955    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:55:56.846152  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:56.848871  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:55:56.850927  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:55:56.850945  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:55:56.870291  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:55:56.870320  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:55:56.942564  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:55:56.942591  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:55:56.942605  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:55:56.961859  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:55:56.961891  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:55:57.003460  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:57.003489  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:55:57.003541  345500 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0216 17:55:57.003555  345500 out.go:239]   Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:43 old-k8s-version-488384 kubelet[1517]: E0216 17:55:43.837044    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:55:57.003563  345500 out.go:239]   Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:55:57.003576  345500 out.go:239]   Feb 16 17:55:46 old-k8s-version-488384 kubelet[1517]: E0216 17:55:46.837955    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:55:46 old-k8s-version-488384 kubelet[1517]: E0216 17:55:46.837955    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:55:57.003587  345500 out.go:239]   Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:55:57.003594  345500 out.go:239]   Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:55:57.003608  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:55:57.003614  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:07.005090  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:07.015691  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:07.033509  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.033531  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:07.033593  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:07.050869  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.050894  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:07.050954  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:07.067996  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.068024  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:07.068087  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:07.091475  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.091503  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:07.091563  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:07.109345  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.109367  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:07.109431  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:07.127358  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.127382  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:07.127444  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:07.145830  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.145853  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:07.145919  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:07.166327  345500 logs.go:276] 0 containers: []
	W0216 17:56:07.166350  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:56:07.166365  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:07.166379  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:07.185330  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:07.185361  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:07.283740  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:56:07.283773  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:07.283788  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:07.301967  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:07.302002  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:56:07.345514  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:07.345543  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:07.367551  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:44 old-k8s-version-488384 kubelet[1517]: E0216 17:55:44.836511    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:07.372496  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:46 old-k8s-version-488384 kubelet[1517]: E0216 17:55:46.837955    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:07.390152  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:07.392857  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:07.397777  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.840003    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:07.398418  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.841129    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:07.418176  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:07.419401  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:07.419422  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:07.419481  345500 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0216 17:56:07.419493  345500 out.go:239]   Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:07.419501  345500 out.go:239]   Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:07.419508  345500 out.go:239]   Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.840003    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.840003    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:07.419521  345500 out.go:239]   Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.841129    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.841129    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:07.419528  345500 out.go:239]   Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:07.419535  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:07.419541  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:17.421432  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:17.432714  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:17.450710  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.450735  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:17.450796  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:17.470618  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.470641  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:17.470708  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:17.488788  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.488813  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:17.488898  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:17.506619  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.506644  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:17.506713  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:17.529004  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.529037  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:17.529107  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:17.547859  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.547884  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:17.547949  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:17.566544  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.566567  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:17.566640  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:17.585942  345500 logs.go:276] 0 containers: []
	W0216 17:56:17.585967  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
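	The eight probes above all have the same shape: list every container, running or exited, whose name carries the k8s_<component> prefix that dockershim assigns, and warn when nothing matches. The sweep can be reproduced by hand; this sketch simply mirrors the docker ps invocations in the log (it was not part of the captured run):
	
	    # Re-run minikube's per-component container probe manually.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-none}"
	    done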
	I0216 17:56:17.585978  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:17.585992  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:17.668182  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
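	The describe-nodes step fails with "connection refused" on localhost:8443 for the same underlying reason: the probes above found no kube-apiserver container, so nothing is serving the API port. A quick check from a shell on the node (standard ss/curl tooling, not part of the captured run) would confirm the port is simply closed:
	
	    # Nothing should be listening on the apiserver port while the
	    # control-plane containers fail to start.
	    sudo ss -ltn 'sport = :8443'
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"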
	I0216 17:56:17.668201  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:17.668215  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:17.686994  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:17.687024  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
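	The container-status gather is the one probe with a built-in fallback: the backtick expression prefers crictl and drops to plain docker ps when crictl is absent or errors out. Written out as an equivalent sketch:
	
	    # Equivalent to the probe above: try crictl first; on any failure
	    # (binary absent or command error) fall back to docker.
	    sudo crictl ps -a 2>/dev/null || sudo docker ps -a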
	I0216 17:56:17.732159  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:17.732189  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:17.752866  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:54 old-k8s-version-488384 kubelet[1517]: E0216 17:55:54.836033    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:17.755632  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:55 old-k8s-version-488384 kubelet[1517]: E0216 17:55:55.836448    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:17.760628  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.840003    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:17.761410  345500 logs.go:138] Found kubelet problem: Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.841129    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:17.781262  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:17.788382  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.839232    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:17.789085  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.840298    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:17.793790  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:11 old-k8s-version-488384 kubelet[1517]: E0216 17:56:11.840578    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:56:17.806300  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:17.806322  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:17.825911  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:17.825948  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:17.826004  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:56:17.826176  345500 out.go:239]   Feb 16 17:55:57 old-k8s-version-488384 kubelet[1517]: E0216 17:55:57.841129    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:17.826187  345500 out.go:239]   Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:17.826196  345500 out.go:239]   Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.839232    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:17.826212  345500 out.go:239]   Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.840298    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:17.826219  345500 out.go:239]   Feb 16 17:56:11 old-k8s-version-488384 kubelet[1517]: E0216 17:56:11.840578    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:56:17.826228  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:17.826237  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:27.826854  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:27.837796  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:27.855466  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.855489  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:27.855551  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:27.872558  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.872580  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:27.872685  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:27.891096  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.891117  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:27.891179  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:27.908488  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.908508  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:27.908568  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:27.928008  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.928029  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:27.928093  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:27.946790  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.946816  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:27.946882  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:27.965094  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.965116  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:27.965179  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:27.983556  345500 logs.go:276] 0 containers: []
	W0216 17:56:27.983577  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:56:27.983588  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:27.983601  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:56:28.031183  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:28.031214  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:28.058343  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:06 old-k8s-version-488384 kubelet[1517]: E0216 17:56:06.836807    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:28.065579  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.839232    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:28.066284  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:09 old-k8s-version-488384 kubelet[1517]: E0216 17:56:09.840298    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:28.070915  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:11 old-k8s-version-488384 kubelet[1517]: E0216 17:56:11.840578    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:28.090262  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:19 old-k8s-version-488384 kubelet[1517]: E0216 17:56:19.838384    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:28.099727  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:23 old-k8s-version-488384 kubelet[1517]: E0216 17:56:23.837592    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:28.102446  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:24 old-k8s-version-488384 kubelet[1517]: E0216 17:56:24.837421    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:28.105697  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:25 old-k8s-version-488384 kubelet[1517]: E0216 17:56:25.837506    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:56:28.110989  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:28.111024  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:28.135474  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:28.135508  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:28.255326  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:56:28.255350  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:28.255363  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:28.279109  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:28.279138  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:28.279189  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:56:28.279206  345500 out.go:239]   Feb 16 17:56:11 old-k8s-version-488384 kubelet[1517]: E0216 17:56:11.840578    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:28.279214  345500 out.go:239]   Feb 16 17:56:19 old-k8s-version-488384 kubelet[1517]: E0216 17:56:19.838384    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:28.279227  345500 out.go:239]   Feb 16 17:56:23 old-k8s-version-488384 kubelet[1517]: E0216 17:56:23.837592    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:28.279236  345500 out.go:239]   Feb 16 17:56:24 old-k8s-version-488384 kubelet[1517]: E0216 17:56:24.837421    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:28.279246  345500 out.go:239]   Feb 16 17:56:25 old-k8s-version-488384 kubelet[1517]: E0216 17:56:25.837506    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:56:28.279253  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:28.279260  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:38.280523  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:38.292828  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:38.326631  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.326652  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:38.326712  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:38.367908  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.367929  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:38.367994  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:38.393866  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.393935  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:38.394026  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:38.413805  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.413827  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:38.413890  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:38.431829  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.431853  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:38.431914  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:38.449855  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.449877  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:38.449941  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:38.471436  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.471460  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:38.471517  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:38.488698  345500 logs.go:276] 0 containers: []
	W0216 17:56:38.488721  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:56:38.488734  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:38.488747  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:38.520923  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:19 old-k8s-version-488384 kubelet[1517]: E0216 17:56:19.838384    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:38.529928  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:23 old-k8s-version-488384 kubelet[1517]: E0216 17:56:23.837592    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:38.532664  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:24 old-k8s-version-488384 kubelet[1517]: E0216 17:56:24.837421    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:38.535834  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:25 old-k8s-version-488384 kubelet[1517]: E0216 17:56:25.837506    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:38.555890  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:34 old-k8s-version-488384 kubelet[1517]: E0216 17:56:34.837229    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:38.562915  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.840568    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:38.563568  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.846085    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:56:38.564958  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:38.564976  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:38.583946  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:38.583975  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:38.666024  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:56:38.666046  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:38.666060  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:38.683841  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:38.683871  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:56:38.732003  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:38.732030  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:38.732078  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:56:38.732090  345500 out.go:239]   Feb 16 17:56:24 old-k8s-version-488384 kubelet[1517]: E0216 17:56:24.837421    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:38.732113  345500 out.go:239]   Feb 16 17:56:25 old-k8s-version-488384 kubelet[1517]: E0216 17:56:25.837506    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:38.732128  345500 out.go:239]   Feb 16 17:56:34 old-k8s-version-488384 kubelet[1517]: E0216 17:56:34.837229    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:38.732136  345500 out.go:239]   Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.840568    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:38.732145  345500 out.go:239]   Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.846085    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:56:38.732151  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:38.732158  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:48.733610  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:48.744372  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:48.761416  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.761437  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:48.761505  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:48.778875  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.778897  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:48.778959  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:48.797166  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.797187  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:48.797263  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:48.823778  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.823800  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:48.823861  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:48.842183  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.842215  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:48.842279  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:48.860244  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.860264  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:48.860322  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:48.880177  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.880196  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:48.880260  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:48.897730  345500 logs.go:276] 0 containers: []
	W0216 17:56:48.897798  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:56:48.897823  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:48.897853  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:48.919594  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:25 old-k8s-version-488384 kubelet[1517]: E0216 17:56:25.837506    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:48.940269  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:34 old-k8s-version-488384 kubelet[1517]: E0216 17:56:34.837229    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:48.947287  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.840568    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:48.947945  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.846085    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:48.953033  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:39 old-k8s-version-488384 kubelet[1517]: E0216 17:56:39.838202    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:48.966183  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:45 old-k8s-version-488384 kubelet[1517]: E0216 17:56:45.841341    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:48.972555  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:48.972575  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:48.990856  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:48.990888  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:49.065317  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:56:49.065339  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:49.065353  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:49.085124  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:49.085161  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:56:49.137705  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:49.137738  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:49.137815  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:56:49.137830  345500 out.go:239]   Feb 16 17:56:34 old-k8s-version-488384 kubelet[1517]: E0216 17:56:34.837229    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:49.137866  345500 out.go:239]   Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.840568    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:49.137882  345500 out.go:239]   Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.846085    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:49.137891  345500 out.go:239]   Feb 16 17:56:39 old-k8s-version-488384 kubelet[1517]: E0216 17:56:39.838202    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:49.137905  345500 out.go:239]   Feb 16 17:56:45 old-k8s-version-488384 kubelet[1517]: E0216 17:56:45.841341    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:49.137912  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:49.137941  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:56:59.138858  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:56:59.149492  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:56:59.167577  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.167601  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:56:59.167659  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:56:59.184599  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.184620  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:56:59.184772  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:56:59.202252  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.202277  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:56:59.202339  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:56:59.219374  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.219396  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:56:59.219458  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:56:59.236188  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.236208  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:56:59.236293  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:56:59.253863  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.253885  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:56:59.253945  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:56:59.277212  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.277285  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:56:59.277360  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:56:59.294196  345500 logs.go:276] 0 containers: []
	W0216 17:56:59.294219  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:56:59.294230  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:56:59.294242  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:56:59.320021  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.840568    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:59.320837  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:37 old-k8s-version-488384 kubelet[1517]: E0216 17:56:37.846085    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:59.326029  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:39 old-k8s-version-488384 kubelet[1517]: E0216 17:56:39.838202    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:59.339370  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:45 old-k8s-version-488384 kubelet[1517]: E0216 17:56:45.841341    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:59.350749  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:50 old-k8s-version-488384 kubelet[1517]: E0216 17:56:50.842768    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:59.357038  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.847444    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:59.357680  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.851077    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:59.368749  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:57 old-k8s-version-488384 kubelet[1517]: E0216 17:56:57.836573    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:59.371671  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:56:59.371692  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:56:59.391022  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:56:59.391052  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:56:59.464890  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:56:59.464961  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:56:59.464991  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:56:59.483478  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:56:59.483509  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:56:59.528005  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:59.528031  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:56:59.528079  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:56:59.528090  345500 out.go:239]   Feb 16 17:56:45 old-k8s-version-488384 kubelet[1517]: E0216 17:56:45.841341    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:56:59.528106  345500 out.go:239]   Feb 16 17:56:50 old-k8s-version-488384 kubelet[1517]: E0216 17:56:50.842768    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:56:59.528115  345500 out.go:239]   Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.847444    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:56:59.528128  345500 out.go:239]   Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.851077    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:56:59.528134  345500 out.go:239]   Feb 16 17:56:57 old-k8s-version-488384 kubelet[1517]: E0216 17:56:57.836573    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:56:59.528141  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:56:59.528154  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:57:09.529763  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:57:09.540382  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:57:09.556800  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.556825  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:57:09.556883  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:57:09.573476  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.573499  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:57:09.573555  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:57:09.589482  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.589504  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:57:09.589569  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:57:09.606527  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.606553  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:57:09.606622  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:57:09.624819  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.624841  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:57:09.624900  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:57:09.642268  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.642289  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:57:09.642347  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:57:09.666666  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.666688  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:57:09.666752  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:57:09.684081  345500 logs.go:276] 0 containers: []
	W0216 17:57:09.684102  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:57:09.684112  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:57:09.684126  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:57:09.755937  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:57:09.755955  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:57:09.755967  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:57:09.774424  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:57:09.774457  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:57:09.815914  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:57:09.815952  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:57:09.854097  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:50 old-k8s-version-488384 kubelet[1517]: E0216 17:56:50.842768    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:09.859181  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.847444    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:09.859824  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.851077    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:09.871138  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:57 old-k8s-version-488384 kubelet[1517]: E0216 17:56:57.836573    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:09.882476  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:02 old-k8s-version-488384 kubelet[1517]: E0216 17:57:02.836561    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:09.889799  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843129    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:09.889990  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843817    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:57:09.899004  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:57:09.899031  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:57:09.917386  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:09.917453  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:57:09.917512  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:57:09.917522  345500 out.go:239]   Feb 16 17:56:52 old-k8s-version-488384 kubelet[1517]: E0216 17:56:52.851077    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:09.917532  345500 out.go:239]   Feb 16 17:56:57 old-k8s-version-488384 kubelet[1517]: E0216 17:56:57.836573    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:09.917544  345500 out.go:239]   Feb 16 17:57:02 old-k8s-version-488384 kubelet[1517]: E0216 17:57:02.836561    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:09.917552  345500 out.go:239]   Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843129    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:09.917558  345500 out.go:239]   Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843817    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:57:09.917570  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:09.917577  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:57:19.918143  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:57:19.929166  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:57:19.946191  345500 logs.go:276] 0 containers: []
	W0216 17:57:19.946214  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:57:19.946282  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:57:19.963346  345500 logs.go:276] 0 containers: []
	W0216 17:57:19.963370  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:57:19.963430  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:57:19.981090  345500 logs.go:276] 0 containers: []
	W0216 17:57:19.981115  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:57:19.981173  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:57:19.998531  345500 logs.go:276] 0 containers: []
	W0216 17:57:19.998555  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:57:19.998619  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:57:20.017184  345500 logs.go:276] 0 containers: []
	W0216 17:57:20.017212  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:57:20.017277  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:57:20.036156  345500 logs.go:276] 0 containers: []
	W0216 17:57:20.036178  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:57:20.036242  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:57:20.054163  345500 logs.go:276] 0 containers: []
	W0216 17:57:20.054190  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:57:20.054255  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:57:20.074030  345500 logs.go:276] 0 containers: []
	W0216 17:57:20.074052  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:57:20.074063  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:57:20.074075  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:57:20.098499  345500 logs.go:138] Found kubelet problem: Feb 16 17:56:57 old-k8s-version-488384 kubelet[1517]: E0216 17:56:57.836573    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:20.109878  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:02 old-k8s-version-488384 kubelet[1517]: E0216 17:57:02.836561    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:20.117165  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843129    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:20.117355  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843817    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:20.126859  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:09 old-k8s-version-488384 kubelet[1517]: E0216 17:57:09.840072    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:20.142785  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:15 old-k8s-version-488384 kubelet[1517]: E0216 17:57:15.835554    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:20.147833  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.843738    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:20.148474  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.844834    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:57:20.153138  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:57:20.153161  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:57:20.171789  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:57:20.171819  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:57:20.243486  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:57:20.243507  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:57:20.243519  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:57:20.262537  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:57:20.262567  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:57:20.310079  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:20.310108  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:57:20.310162  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:57:20.310177  345500 out.go:239]   Feb 16 17:57:05 old-k8s-version-488384 kubelet[1517]: E0216 17:57:05.843817    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:20.310187  345500 out.go:239]   Feb 16 17:57:09 old-k8s-version-488384 kubelet[1517]: E0216 17:57:09.840072    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:20.310204  345500 out.go:239]   Feb 16 17:57:15 old-k8s-version-488384 kubelet[1517]: E0216 17:57:15.835554    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:20.310212  345500 out.go:239]   Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.843738    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:20.310222  345500 out.go:239]   Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.844834    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:57:20.310228  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:20.310235  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:57:30.311212  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:57:30.322783  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:57:30.340019  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.340038  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:57:30.340096  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:57:30.359202  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.359229  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:57:30.359299  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:57:30.378761  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.378787  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:57:30.378846  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:57:30.395456  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.395481  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:57:30.395538  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:57:30.411988  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.412014  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:57:30.412084  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:57:30.428394  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.428420  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:57:30.428477  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:57:30.454323  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.454349  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:57:30.454411  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:57:30.476313  345500 logs.go:276] 0 containers: []
	W0216 17:57:30.476334  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:57:30.476345  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:57:30.476356  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:57:30.550580  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:57:30.550601  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:57:30.550614  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:57:30.567744  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:57:30.567772  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:57:30.612773  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:57:30.612803  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:57:30.639539  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:09 old-k8s-version-488384 kubelet[1517]: E0216 17:57:09.840072    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:30.653345  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:15 old-k8s-version-488384 kubelet[1517]: E0216 17:57:15.835554    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:30.659763  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.843738    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:30.660416  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.844834    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:30.673787  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:23 old-k8s-version-488384 kubelet[1517]: E0216 17:57:23.836150    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:30.682672  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:27 old-k8s-version-488384 kubelet[1517]: E0216 17:57:27.844041    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:30.685606  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:30.688790  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:57:30.690396  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:57:30.690414  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:57:30.710033  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:30.710060  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:57:30.710103  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:57:30.710120  345500 out.go:239]   Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.844834    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:30.710128  345500 out.go:239]   Feb 16 17:57:23 old-k8s-version-488384 kubelet[1517]: E0216 17:57:23.836150    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:30.710142  345500 out.go:239]   Feb 16 17:57:27 old-k8s-version-488384 kubelet[1517]: E0216 17:57:27.844041    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:30.710150  345500 out.go:239]   Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:30.710160  345500 out.go:239]   Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 17:57:30.710166  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:30.710178  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:57:40.711899  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:57:40.722115  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:57:40.739786  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.739806  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:57:40.739862  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:57:40.759343  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.759363  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:57:40.759460  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:57:40.776469  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.776489  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:57:40.776546  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:57:40.792752  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.792776  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:57:40.792839  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:57:40.809125  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.809147  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:57:40.809210  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:57:40.835055  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.835077  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:57:40.835133  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:57:40.853529  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.853600  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:57:40.853672  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:57:40.874706  345500 logs.go:276] 0 containers: []
	W0216 17:57:40.874726  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:57:40.874736  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:57:40.874750  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:57:40.892560  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:57:40.892627  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:57:40.963851  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:57:40.963915  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:57:40.963942  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:57:40.983340  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:57:40.983377  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:57:41.024050  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:57:41.024123  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:57:41.046822  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.843738    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:41.047469  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:17 old-k8s-version-488384 kubelet[1517]: E0216 17:57:17.844834    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:41.060883  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:23 old-k8s-version-488384 kubelet[1517]: E0216 17:57:23.836150    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:41.069713  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:27 old-k8s-version-488384 kubelet[1517]: E0216 17:57:27.844041    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:41.072617  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:41.075824  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:41.091059  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:36 old-k8s-version-488384 kubelet[1517]: E0216 17:57:36.844130    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:41.099968  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:57:41.100378  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:41.100392  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:57:41.100444  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:57:41.100453  345500 out.go:239]   Feb 16 17:57:27 old-k8s-version-488384 kubelet[1517]: E0216 17:57:27.844041    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 16 17:57:27 old-k8s-version-488384 kubelet[1517]: E0216 17:57:27.844041    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:41.100462  345500 out.go:239]   Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:41.100469  345500 out.go:239]   Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	  Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:41.100476  345500 out.go:239]   Feb 16 17:57:36 old-k8s-version-488384 kubelet[1517]: E0216 17:57:36.844130    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 16 17:57:36 old-k8s-version-488384 kubelet[1517]: E0216 17:57:36.844130    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:41.100484  345500 out.go:239]   Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:57:41.100497  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:41.100503  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
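
Note: every control-plane pod in the cycle above is blocked by the same kubelet error: Docker reports no Id or size for the cached k8s.gcr.io v1.16.0 images, so each "StartContainer" attempt is skipped with ImageInspectError. One way to check the cached image metadata by hand is sketched below; this is not part of the test run, and it assumes the old-k8s-version-488384 profile from this log is still up (the image name is copied from the kubelet lines above):

	# Inspect the image from inside the minikube node; missing Id/Size fields
	# in the JSON output would match the kubelet's ImageInspectError.
	minikube -p old-k8s-version-488384 ssh "docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0"
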
	I0216 17:57:51.101531  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:57:51.113497  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:57:51.131571  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.131606  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:57:51.131673  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:57:51.151419  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.151445  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:57:51.151510  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:57:51.170884  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.170910  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:57:51.170972  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:57:51.187614  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.187635  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:57:51.187695  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:57:51.204325  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.204348  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:57:51.204406  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:57:51.222034  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.222056  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:57:51.222124  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:57:51.239521  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.239541  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:57:51.239611  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:57:51.256471  345500 logs.go:276] 0 containers: []
	W0216 17:57:51.256493  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:57:51.256503  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:57:51.256517  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:57:51.332463  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:57:51.332485  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:57:51.332499  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:57:51.350420  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:57:51.350449  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:57:51.393344  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:57:51.393410  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:57:51.421555  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:28 old-k8s-version-488384 kubelet[1517]: E0216 17:57:28.837265    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:51.424773  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:29 old-k8s-version-488384 kubelet[1517]: E0216 17:57:29.835141    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:51.440062  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:36 old-k8s-version-488384 kubelet[1517]: E0216 17:57:36.844130    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:51.449000  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:51.453949  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.838974    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:51.454682  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.840084    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:51.472270  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:50 old-k8s-version-488384 kubelet[1517]: E0216 17:57:50.836456    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:57:51.473465  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:57:51.473485  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:57:51.491439  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:51.491463  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:57:51.491508  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:57:51.491523  345500 out.go:239]   Feb 16 17:57:36 old-k8s-version-488384 kubelet[1517]: E0216 17:57:36.844130    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:57:51.491531  345500 out.go:239]   Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:57:51.491543  345500 out.go:239]   Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.838974    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:57:51.491552  345500 out.go:239]   Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.840084    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:57:51.491565  345500 out.go:239]   Feb 16 17:57:50 old-k8s-version-488384 kubelet[1517]: E0216 17:57:50.836456    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 17:57:51.491572  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:57:51.491583  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
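
Note: each ~10-second poll above repeats the same probe sequence: a pgrep for a kube-apiserver process, then "docker ps -a" filtered on the k8s_<component> container-name prefix for every expected component. A sketch of the same probes, to run inside "minikube -p old-k8s-version-488384 ssh" (the component list is taken from the log; the k8s_ prefix is the container naming convention the kubelet uses with Docker):

	# Print container IDs per control-plane component; empty output for every
	# component reproduces the "0 containers" results logged above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	done
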
	I0216 17:58:01.493452  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:58:01.504091  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:58:01.520615  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.520673  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:58:01.520733  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:58:01.537296  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.537319  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:58:01.537390  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:58:01.553527  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.553551  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:58:01.553645  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:58:01.571249  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.571275  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:58:01.571333  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:58:01.587410  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.587430  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:58:01.587490  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:58:01.604427  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.604451  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:58:01.604515  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:58:01.627291  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.627318  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:58:01.627377  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:58:01.644747  345500 logs.go:276] 0 containers: []
	W0216 17:58:01.644772  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:58:01.644782  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:58:01.644795  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:58:01.668203  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:58:01.668248  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:58:01.739920  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:58:01.739941  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:58:01.739954  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:58:01.757526  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:58:01.757560  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:58:01.802688  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:58:01.802716  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:58:01.833416  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:40 old-k8s-version-488384 kubelet[1517]: E0216 17:57:40.840260    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:01.840031  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.838974    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:01.840796  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.840084    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:01.860160  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:50 old-k8s-version-488384 kubelet[1517]: E0216 17:57:50.836456    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:01.867407  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.844873    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:01.868111  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.845690    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:01.875312  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:56 old-k8s-version-488384 kubelet[1517]: E0216 17:57:56.835224    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:58:01.887730  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:01.887754  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:58:01.887810  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:58:01.887819  345500 out.go:239]   Feb 16 17:57:42 old-k8s-version-488384 kubelet[1517]: E0216 17:57:42.840084    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:01.887827  345500 out.go:239]   Feb 16 17:57:50 old-k8s-version-488384 kubelet[1517]: E0216 17:57:50.836456    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:01.887837  345500 out.go:239]   Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.844873    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:01.887843  345500 out.go:239]   Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.845690    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:01.887849  345500 out.go:239]   Feb 16 17:57:56 old-k8s-version-488384 kubelet[1517]: E0216 17:57:56.835224    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:58:01.887859  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:01.887865  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
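
Note: "describe nodes" keeps failing with "connection refused" on localhost:8443 because, per the probes above, no kube-apiserver container ever starts. A quick manual confirmation using the kubectl binary and kubeconfig paths shown in this log (again a sketch, assuming the profile is still running):

	# Succeeds with a node list when the apiserver is healthy; with the
	# apiserver down it fails with the same connection-refused error as above.
	minikube -p old-k8s-version-488384 ssh "sudo /var/lib/minikube/binaries/v1.16.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig"
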
	I0216 17:58:11.889289  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:58:11.902087  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:58:11.928567  345500 logs.go:276] 0 containers: []
	W0216 17:58:11.928592  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:58:11.928682  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:58:11.946090  345500 logs.go:276] 0 containers: []
	W0216 17:58:11.946114  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:58:11.946183  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:58:11.962216  345500 logs.go:276] 0 containers: []
	W0216 17:58:11.962237  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:58:11.962295  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:58:11.978641  345500 logs.go:276] 0 containers: []
	W0216 17:58:11.978662  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:58:11.978720  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:58:12.009769  345500 logs.go:276] 0 containers: []
	W0216 17:58:12.009790  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:58:12.009849  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:58:12.035955  345500 logs.go:276] 0 containers: []
	W0216 17:58:12.035979  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:58:12.036036  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:58:12.059678  345500 logs.go:276] 0 containers: []
	W0216 17:58:12.059706  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:58:12.059802  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:58:12.086914  345500 logs.go:276] 0 containers: []
	W0216 17:58:12.086944  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:58:12.086958  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:58:12.086971  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:58:12.115897  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:58:12.115926  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:58:12.222559  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:58:12.222581  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:58:12.222596  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:58:12.244043  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:58:12.244074  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:58:12.306798  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:58:12.306832  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:58:12.344363  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:50 old-k8s-version-488384 kubelet[1517]: E0216 17:57:50.836456    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:12.351317  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.844873    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:12.351960  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:53 old-k8s-version-488384 kubelet[1517]: E0216 17:57:53.845690    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:12.358987  345500 logs.go:138] Found kubelet problem: Feb 16 17:57:56 old-k8s-version-488384 kubelet[1517]: E0216 17:57:56.835224    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:12.373408  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:01 old-k8s-version-488384 kubelet[1517]: E0216 17:58:01.847474    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:12.380914  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:04 old-k8s-version-488384 kubelet[1517]: E0216 17:58:04.835625    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:12.389957  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.849607    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:12.390582  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.854028    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:58:12.397903  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:12.397922  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:58:12.397984  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:58:12.398000  345500 out.go:239]   Feb 16 17:57:56 old-k8s-version-488384 kubelet[1517]: E0216 17:57:56.835224    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:12.398008  345500 out.go:239]   Feb 16 17:58:01 old-k8s-version-488384 kubelet[1517]: E0216 17:58:01.847474    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:12.398015  345500 out.go:239]   Feb 16 17:58:04 old-k8s-version-488384 kubelet[1517]: E0216 17:58:04.835625    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:12.398025  345500 out.go:239]   Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.849607    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:12.398212  345500 out.go:239]   Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.854028    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0216 17:58:12.398221  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:12.398231  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:58:22.400078  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:58:22.410567  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:58:22.431064  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.431086  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:58:22.431144  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:58:22.447024  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.447045  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:58:22.447103  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:58:22.463770  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.463791  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:58:22.463853  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:58:22.482404  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.482425  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:58:22.482490  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:58:22.498768  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.498790  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:58:22.498871  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:58:22.515176  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.515198  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:58:22.515257  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:58:22.531144  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.531166  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:58:22.531223  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:58:22.548090  345500 logs.go:276] 0 containers: []
	W0216 17:58:22.548112  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:58:22.548122  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:58:22.548140  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:58:22.567000  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:58:22.567078  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:58:22.657997  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 17:58:22.658070  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:58:22.658133  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:58:22.684616  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:58:22.684661  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:58:22.728171  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:58:22.728209  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:58:22.756780  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:01 old-k8s-version-488384 kubelet[1517]: E0216 17:58:01.847474    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:22.764221  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:04 old-k8s-version-488384 kubelet[1517]: E0216 17:58:04.835625    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:22.773368  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.849607    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:22.773979  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.854028    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:22.783146  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:12 old-k8s-version-488384 kubelet[1517]: E0216 17:58:12.846855    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:22.790093  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:15 old-k8s-version-488384 kubelet[1517]: E0216 17:58:15.840050    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:58:22.805764  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:22.805792  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:58:22.805840  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:58:22.805850  345500 out.go:239]   Feb 16 17:58:04 old-k8s-version-488384 kubelet[1517]: E0216 17:58:04.835625    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:22.805863  345500 out.go:239]   Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.849607    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:22.805870  345500 out.go:239]   Feb 16 17:58:08 old-k8s-version-488384 kubelet[1517]: E0216 17:58:08.854028    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:22.805877  345500 out.go:239]   Feb 16 17:58:12 old-k8s-version-488384 kubelet[1517]: E0216 17:58:12.846855    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:22.805890  345500 out.go:239]   Feb 16 17:58:15 old-k8s-version-488384 kubelet[1517]: E0216 17:58:15.840050    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:58:22.805900  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:22.805906  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:58:32.806719  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:58:32.817700  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 17:58:32.836599  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.836623  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 17:58:32.836707  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 17:58:32.852690  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.852713  345500 logs.go:278] No container was found matching "etcd"
	I0216 17:58:32.852771  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 17:58:32.869215  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.869238  345500 logs.go:278] No container was found matching "coredns"
	I0216 17:58:32.869295  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 17:58:32.888425  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.888448  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 17:58:32.888509  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 17:58:32.910355  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.910377  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 17:58:32.910440  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 17:58:32.938459  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.938486  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 17:58:32.938545  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 17:58:32.956313  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.956335  345500 logs.go:278] No container was found matching "kindnet"
	I0216 17:58:32.956391  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 17:58:32.975286  345500 logs.go:276] 0 containers: []
	W0216 17:58:32.975308  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 17:58:32.975318  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 17:58:32.975331  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 17:58:33.017909  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:12 old-k8s-version-488384 kubelet[1517]: E0216 17:58:12.846855    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:33.025991  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:15 old-k8s-version-488384 kubelet[1517]: E0216 17:58:15.840050    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:33.044471  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:22 old-k8s-version-488384 kubelet[1517]: E0216 17:58:22.836096    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:33.047417  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:23 old-k8s-version-488384 kubelet[1517]: E0216 17:58:23.840021    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:33.054831  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:26 old-k8s-version-488384 kubelet[1517]: E0216 17:58:26.835437    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:33.057610  345500 logs.go:138] Found kubelet problem: Feb 16 17:58:27 old-k8s-version-488384 kubelet[1517]: E0216 17:58:27.836041    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:58:33.068284  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 17:58:33.068308  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 17:58:33.086952  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 17:58:33.086983  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 17:58:33.171209  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
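The describe-nodes failure is secondary: "connection to the server localhost:8443 was refused" just restates that no apiserver is listening. Probing the same endpoint directly would look like this sketch (an illustrative check run on the node, not part of the captured log):

    curl -sk https://localhost:8443/healthz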
	I0216 17:58:33.171233  345500 logs.go:123] Gathering logs for Docker ...
	I0216 17:58:33.171247  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 17:58:33.191392  345500 logs.go:123] Gathering logs for container status ...
	I0216 17:58:33.191461  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 17:58:33.241391  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:33.241418  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0216 17:58:33.241487  345500 out.go:239] X Problems detected in kubelet:
	W0216 17:58:33.241503  345500 out.go:239]   Feb 16 17:58:15 old-k8s-version-488384 kubelet[1517]: E0216 17:58:15.840050    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 17:58:33.241511  345500 out.go:239]   Feb 16 17:58:22 old-k8s-version-488384 kubelet[1517]: E0216 17:58:22.836096    1517 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 17:58:33.241549  345500 out.go:239]   Feb 16 17:58:23 old-k8s-version-488384 kubelet[1517]: E0216 17:58:23.840021    1517 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 17:58:33.241564  345500 out.go:239]   Feb 16 17:58:26 old-k8s-version-488384 kubelet[1517]: E0216 17:58:26.835437    1517 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 17:58:33.241572  345500 out.go:239]   Feb 16 17:58:27 old-k8s-version-488384 kubelet[1517]: E0216 17:58:27.836041    1517 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 17:58:33.241578  345500 out.go:304] Setting ErrFile to fd 2...
	I0216 17:58:33.241589  345500 out.go:338] TERM=,COLORTERM=, which probably does not support color
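Every kubelet problem flagged above has the same shape: dockershim asks Docker to inspect one of the k8s.gcr.io control-plane images and the Id or Size field comes back unset, so the static pods are never started. Together with the preflight warning later in the log that Docker 25.0.3 is far past the last validated version (18.09), this suggests an inspect-format incompatibility between Docker 25 and the v1.16 kubelet rather than a missing image. A hand check of the two fields would look like this sketch (illustrative, assuming the image is present on the node):

    docker image inspect k8s.gcr.io/etcd:3.3.15-0 --format '{{.Id}} {{.Size}}'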
	I0216 17:58:43.242540  345500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:58:43.254107  345500 kubeadm.go:640] restartCluster took 4m19.767567604s
	W0216 17:58:43.254167  345500 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0216 17:58:43.254202  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 17:58:44.433942  345500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.179717899s)
	I0216 17:58:44.434014  345500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:58:44.454369  345500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 17:58:44.466618  345500 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 17:58:44.466692  345500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 17:58:44.482709  345500 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
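The "config check failed" above is expected on a node that was just reset: minikube lists the four kubeconfig files to decide whether stale configs need cleaning, and since all four are absent the cleanup is skipped. The same check by hand, written with brace expansion, is one line:

    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf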
	I0216 17:58:44.482755  345500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 17:58:44.590200  345500 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 17:58:44.590268  345500 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 17:58:44.876565  345500 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 17:58:44.876653  345500 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 17:58:44.876704  345500 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 17:58:44.876743  345500 kubeadm.go:322] OS: Linux
	I0216 17:58:44.876789  345500 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 17:58:44.876837  345500 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 17:58:44.876884  345500 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 17:58:44.876930  345500 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 17:58:44.876976  345500 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 17:58:44.877019  345500 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 17:58:44.993433  345500 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 17:58:44.993540  345500 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 17:58:44.993628  345500 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 17:58:45.292605  345500 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 17:58:45.299052  345500 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 17:58:45.312862  345500 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 17:58:45.439390  345500 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 17:58:45.441820  345500 out.go:204]   - Generating certificates and keys ...
	I0216 17:58:45.441983  345500 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 17:58:45.442068  345500 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 17:58:45.442165  345500 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 17:58:45.442255  345500 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 17:58:45.442451  345500 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 17:58:45.449541  345500 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 17:58:45.449759  345500 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 17:58:45.451092  345500 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 17:58:45.451170  345500 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 17:58:45.453622  345500 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 17:58:45.453670  345500 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 17:58:45.453723  345500 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 17:58:47.656704  345500 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 17:58:48.779551  345500 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 17:58:49.864693  345500 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 17:58:50.194764  345500 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 17:58:50.195973  345500 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 17:58:50.200503  345500 out.go:204]   - Booting up control plane ...
	I0216 17:58:50.200609  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 17:58:50.207875  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 17:58:50.210152  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 17:58:50.211700  345500 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 17:58:50.215791  345500 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 17:59:30.217437  345500 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 18:02:50.221806  345500 kubeadm.go:322] 
	I0216 18:02:50.221879  345500 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 18:02:50.221922  345500 kubeadm.go:322] 	timed out waiting for the condition
	I0216 18:02:50.221933  345500 kubeadm.go:322] 
	I0216 18:02:50.221965  345500 kubeadm.go:322] This error is likely caused by:
	I0216 18:02:50.221998  345500 kubeadm.go:322] 	- The kubelet is not running
	I0216 18:02:50.222100  345500 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 18:02:50.222110  345500 kubeadm.go:322] 
	I0216 18:02:50.222207  345500 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 18:02:50.222241  345500 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 18:02:50.222274  345500 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 18:02:50.222283  345500 kubeadm.go:322] 
	I0216 18:02:50.222386  345500 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 18:02:50.222479  345500 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 18:02:50.222568  345500 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 18:02:50.222615  345500 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 18:02:50.222689  345500 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 18:02:50.222722  345500 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 18:02:50.234313  345500 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 18:02:50.234499  345500 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 18:02:50.234715  345500 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 18:02:50.234831  345500 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 18:02:50.234923  345500 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 18:02:50.235003  345500 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
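None of the four preflight warnings is fatal on its own: the cgroupfs-versus-systemd driver mismatch, the unvalidated Docker version, the unloadable configs kernel module, and the not-enabled kubelet unit are all reported as warnings, and the init proceeds to the wait-control-plane phase before failing. Confirming which cgroup driver Docker is actually using takes one command (a sketch, not from the log):

    docker info --format '{{.CgroupDriver}}'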
	W0216 18:02:50.235172  345500 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 18:02:50.235224  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
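The failed init is not yet terminal: minikube resets the node (the reset just issued) and reruns the identical kubeadm init, as the following lines show. Done by hand, the equivalent sequence is roughly the sketch below; the paths are taken from the log, and the long --ignore-preflight-errors list from the full invocation above is omitted for brevity:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm reset --cri-socket /var/run/dockershim.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml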
	I0216 18:02:51.068384  345500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:02:51.081838  345500 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 18:02:51.081933  345500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 18:02:51.091862  345500 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 18:02:51.091911  345500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 18:02:51.158051  345500 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 18:02:51.158368  345500 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 18:02:51.368027  345500 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 18:02:51.368102  345500 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 18:02:51.368152  345500 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 18:02:51.368188  345500 kubeadm.go:322] OS: Linux
	I0216 18:02:51.368234  345500 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 18:02:51.368282  345500 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 18:02:51.368333  345500 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 18:02:51.368381  345500 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 18:02:51.368435  345500 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 18:02:51.368482  345500 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 18:02:51.467404  345500 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 18:02:51.467512  345500 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 18:02:51.467604  345500 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0216 18:02:51.647725  345500 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 18:02:51.649456  345500 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 18:02:51.658891  345500 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 18:02:51.764984  345500 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 18:02:51.770068  345500 out.go:204]   - Generating certificates and keys ...
	I0216 18:02:51.770161  345500 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 18:02:51.770229  345500 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 18:02:51.770322  345500 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 18:02:51.770395  345500 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 18:02:51.770481  345500 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 18:02:51.770544  345500 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 18:02:51.770616  345500 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 18:02:51.770687  345500 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 18:02:51.770777  345500 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 18:02:51.770861  345500 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 18:02:51.770911  345500 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 18:02:51.770974  345500 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 18:02:52.229219  345500 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 18:02:52.917988  345500 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 18:02:53.854362  345500 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 18:02:54.800326  345500 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 18:02:54.801345  345500 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 18:02:54.803523  345500 out.go:204]   - Booting up control plane ...
	I0216 18:02:54.803618  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 18:02:54.817004  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 18:02:54.822134  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 18:02:54.832027  345500 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 18:02:54.832191  345500 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 18:03:34.832041  345500 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 18:06:54.833058  345500 kubeadm.go:322] 
	I0216 18:06:54.833130  345500 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 18:06:54.833171  345500 kubeadm.go:322] 	timed out waiting for the condition
	I0216 18:06:54.833181  345500 kubeadm.go:322] 
	I0216 18:06:54.833214  345500 kubeadm.go:322] This error is likely caused by:
	I0216 18:06:54.833270  345500 kubeadm.go:322] 	- The kubelet is not running
	I0216 18:06:54.833421  345500 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 18:06:54.833437  345500 kubeadm.go:322] 
	I0216 18:06:54.833539  345500 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 18:06:54.833581  345500 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 18:06:54.833629  345500 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 18:06:54.833641  345500 kubeadm.go:322] 
	I0216 18:06:54.833754  345500 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 18:06:54.833858  345500 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0216 18:06:54.833940  345500 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0216 18:06:54.833998  345500 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 18:06:54.834075  345500 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 18:06:54.834107  345500 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 18:06:54.837607  345500 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 18:06:54.837754  345500 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 18:06:54.837991  345500 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 18:06:54.838112  345500 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 18:06:54.838197  345500 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 18:06:54.838259  345500 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 18:06:54.838325  345500 kubeadm.go:406] StartCluster complete in 12m31.378806665s
	I0216 18:06:54.838408  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 18:06:54.855047  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.855075  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 18:06:54.855139  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 18:06:54.873327  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.873349  345500 logs.go:278] No container was found matching "etcd"
	I0216 18:06:54.873408  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 18:06:54.891087  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.891113  345500 logs.go:278] No container was found matching "coredns"
	I0216 18:06:54.891174  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 18:06:54.909506  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.909531  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 18:06:54.909590  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 18:06:54.927178  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.927200  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 18:06:54.927262  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 18:06:54.945838  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.945860  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 18:06:54.945919  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 18:06:54.962777  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.962800  345500 logs.go:278] No container was found matching "kindnet"
	I0216 18:06:54.962864  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 18:06:54.979838  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.979859  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 18:06:54.979871  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 18:06:54.979884  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 18:06:55.000522  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 18:06:55.000555  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 18:06:55.080113  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 18:06:55.080175  345500 logs.go:123] Gathering logs for Docker ...
	I0216 18:06:55.080201  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 18:06:55.100957  345500 logs.go:123] Gathering logs for container status ...
	I0216 18:06:55.100994  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 18:06:55.142764  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 18:06:55.142792  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 18:06:55.174050  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:33 old-k8s-version-488384 kubelet[10015]: E0216 18:06:33.601902   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 18:06:55.177196  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:34 old-k8s-version-488384 kubelet[10015]: E0216 18:06:34.598572   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 18:06:55.180199  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:35 old-k8s-version-488384 kubelet[10015]: E0216 18:06:35.599305   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 18:06:55.185749  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:37 old-k8s-version-488384 kubelet[10015]: E0216 18:06:37.600892   10015 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 18:06:55.203743  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:45 old-k8s-version-488384 kubelet[10015]: E0216 18:06:45.599731   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 18:06:55.206538  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:46 old-k8s-version-488384 kubelet[10015]: E0216 18:06:46.599194   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 18:06:55.211346  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:48 old-k8s-version-488384 kubelet[10015]: E0216 18:06:48.598609   10015 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 18:06:55.214175  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:49 old-k8s-version-488384 kubelet[10015]: E0216 18:06:49.597666   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
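The "Found kubelet problem" lines above are minikube's own scan of the last 400 kubelet journal entries (logs.go:138). Reproducing the same filter by hand on the node is a one-liner (a sketch):

    sudo journalctl -u kubelet -n 400 --no-pager | grep ImageInspectError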
	W0216 18:06:55.226222  345500 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
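kubeadm's troubleshooting advice, repeated verbatim in each failure above, reduces to four commands. Collected into one sketch for running on the node (CONTAINERID is kubeadm's placeholder for whatever container the ps listing turns up):

    systemctl status kubelet
    journalctl -xeu kubelet
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID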
	W0216 18:06:55.226277  345500 out.go:239] * 
	W0216 18:06:55.226473  345500 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you can list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error, execute with --v=5 or higher
	
	W0216 18:06:55.226511  345500 out.go:239] * 
	W0216 18:06:55.227563  345500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 18:06:55.230540  345500 out.go:177] X Problems detected in kubelet:
	I0216 18:06:55.232806  345500 out.go:177]   Feb 16 18:06:33 old-k8s-version-488384 kubelet[10015]: E0216 18:06:33.601902   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 18:06:55.235603  345500 out.go:177]   Feb 16 18:06:34 old-k8s-version-488384 kubelet[10015]: E0216 18:06:34.598572   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 18:06:55.240627  345500 out.go:177]   Feb 16 18:06:35 old-k8s-version-488384 kubelet[10015]: E0216 18:06:35.599305   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 18:06:55.250380  345500 out.go:177] 
	W0216 18:06:55.259601  345500 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0216 18:06:55.259690  345500 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 18:06:55.259715  345500 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 18:06:55.268587  345500 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-488384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0": exit status 109
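
The kubeadm warnings above point at a cgroup-driver mismatch: Docker reports "cgroupfs" while kubeadm recommends "systemd", and minikube's own suggestion above is to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal remediation sketch in shell, assuming the Docker-side change is made inside the minikube node via the standard daemon.json mechanism; neither step was actually performed in this run:

    # Confirm which driver Docker reports ("cgroupfs" in this run, on both host and node).
    docker info --format '{{.CgroupDriver}}'

    # Hypothetical fix: switch Docker to the systemd cgroup driver, then restart it.
    echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

    # Or follow the log's own suggestion and set the kubelet's driver explicitly:
    minikube start -p old-k8s-version-488384 --extra-config=kubelet.cgroup-driver=systemd
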
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d",
	        "Created": "2024-02-16T17:43:51.781636674Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:54:09.894404762Z",
	            "FinishedAt": "2024-02-16T17:54:08.299046886Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hosts",
	        "LogPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d-json.log",
	        "Name": "/old-k8s-version-488384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-488384:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-488384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-488384",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-488384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-488384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c66234ad4617bc50a90452cc97feb6068a7da7d63af736570cfde4ddcd6338c7",
	            "SandboxKey": "/var/run/docker/netns/c66234ad4617",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-488384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ad7a05058fe",
	                        "old-k8s-version-488384"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "066a8ce33ebb9a8bf9130346706c7668acc42f9f2a9352243a5b99995ed10eb4",
	                    "EndpointID": "d450be87f3ed3d1e3561d8cc627e39f3e3bcf740069efe096870713ebb0ad0af",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-488384",
	                        "2ad7a05058fe"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
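
The docker inspect dump above can be narrowed to the fields the post-mortem actually checks by using docker's Go-template --format flag; a small sketch against the same container (the values shown match the JSON above):

    # Container state ("running" above).
    docker inspect -f '{{.State.Status}}' old-k8s-version-488384

    # Host port published for the node's SSH port 22/tcp ("33082" above).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-488384
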
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 2 (326.447228ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
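
The "Problems detected in kubelet" entries earlier in this log show ImageInspectError for the v1.16.0 control-plane images ("Id or size of image ... is not set"), which points at broken image metadata inside the node. A hedged way to verify from the host, assuming the profile is still reachable over SSH; the re-pull is one possible recovery step, not something this run attempted:

    # A healthy image prints a non-empty Id and a size.
    minikube -p old-k8s-version-488384 ssh -- docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.16.0

    # If the metadata really is broken, re-pulling the image may repair it.
    minikube -p old-k8s-version-488384 ssh -- docker pull k8s.gcr.io/kube-apiserver:v1.16.0
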
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-488384 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | no-preload-323647 image list                           | no-preload-323647            | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-323647                                   | no-preload-323647            | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-323647                                   | no-preload-323647            | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-323647                                   | no-preload-323647            | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:52 UTC |
	| delete  | -p no-preload-323647                                   | no-preload-323647            | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:52 UTC |
	| start   | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC | 16 Feb 24 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-488384        | old-k8s-version-488384       | jenkins | v1.32.0 | 16 Feb 24 17:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-198397            | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:53 UTC | 16 Feb 24 17:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:53 UTC | 16 Feb 24 17:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-198397                 | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:53 UTC | 16 Feb 24 17:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:53 UTC | 16 Feb 24 17:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-488384                              | old-k8s-version-488384       | jenkins | v1.32.0 | 16 Feb 24 17:54 UTC | 16 Feb 24 17:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-488384             | old-k8s-version-488384       | jenkins | v1.32.0 | 16 Feb 24 17:54 UTC | 16 Feb 24 17:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-488384                              | old-k8s-version-488384       | jenkins | v1.32.0 | 16 Feb 24 17:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-198397 image list                          | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:58 UTC | 16 Feb 24 17:58 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:58 UTC | 16 Feb 24 17:59 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	| delete  | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	| delete  | -p                                                     | disable-driver-mounts-083322 | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	|         | disable-driver-mounts-083322                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 18:00 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-396551  | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-396551       | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:06 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 18:00:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 18:00:49.810755  367199 out.go:291] Setting OutFile to fd 1 ...
	I0216 18:00:49.810970  367199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 18:00:49.810995  367199 out.go:304] Setting ErrFile to fd 2...
	I0216 18:00:49.811015  367199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 18:00:49.811319  367199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 18:00:49.811724  367199 out.go:298] Setting JSON to false
	I0216 18:00:49.814054  367199 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6200,"bootTime":1708100250,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 18:00:49.814241  367199 start.go:139] virtualization:  
	I0216 18:00:49.817799  367199 out.go:177] * [default-k8s-diff-port-396551] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 18:00:49.820873  367199 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 18:00:49.820941  367199 notify.go:220] Checking for updates...
	I0216 18:00:49.823417  367199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 18:00:49.826108  367199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:00:49.828775  367199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 18:00:49.831760  367199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 18:00:49.834024  367199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 18:00:49.836919  367199 config.go:182] Loaded profile config "default-k8s-diff-port-396551": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 18:00:49.837517  367199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 18:00:49.860038  367199 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 18:00:49.860220  367199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 18:00:49.928735  367199 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-16 18:00:49.918740253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 18:00:49.928851  367199 docker.go:295] overlay module found
	I0216 18:00:49.932747  367199 out.go:177] * Using the docker driver based on existing profile
	I0216 18:00:49.934927  367199 start.go:299] selected driver: docker
	I0216 18:00:49.934946  367199 start.go:903] validating driver "docker" against &{Name:default-k8s-diff-port-396551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-396551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:00:49.935050  367199 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 18:00:49.935661  367199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 18:00:50.001118  367199 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-16 18:00:49.988672971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 18:00:50.001485  367199 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0216 18:00:50.001570  367199 cni.go:84] Creating CNI manager for ""
	I0216 18:00:50.001592  367199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:00:50.001614  367199 start_flags.go:323] config:
	{Name:default-k8s-diff-port-396551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-396551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:00:50.005597  367199 out.go:177] * Starting control plane node default-k8s-diff-port-396551 in cluster default-k8s-diff-port-396551
	I0216 18:00:50.007680  367199 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 18:00:50.009946  367199 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 18:00:50.011678  367199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 18:00:50.011756  367199 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0216 18:00:50.011789  367199 cache.go:56] Caching tarball of preloaded images
	I0216 18:00:50.011881  367199 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 18:00:50.011897  367199 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0216 18:00:50.012024  367199 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 18:00:50.012234  367199 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/config.json ...
	I0216 18:00:50.030976  367199 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 18:00:50.031002  367199 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 18:00:50.031028  367199 cache.go:194] Successfully downloaded all kic artifacts
	I0216 18:00:50.031059  367199 start.go:365] acquiring machines lock for default-k8s-diff-port-396551: {Name:mk0fdd673022dc57bc21b2259c4c264d05212686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 18:00:50.031146  367199 start.go:369] acquired machines lock for "default-k8s-diff-port-396551" in 55.606µs
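For reference, the machines-lock serialization above can be sketched in shell with flock(1); minikube implements it in Go, so the lock path below is illustrative and only the 10m timeout mirrors the log entry:
	(
	  flock --timeout 600 9 || exit 1    # Timeout:10m0s from the lock spec above
	  echo "holding machines lock for default-k8s-diff-port-396551"
	  # ...inspect/start the container while the lock is held...
	) 9>/tmp/minikube-machines.lock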
	I0216 18:00:50.031172  367199 start.go:96] Skipping create...Using existing machine configuration
	I0216 18:00:50.031190  367199 fix.go:54] fixHost starting: 
	I0216 18:00:50.031469  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:00:50.048473  367199 fix.go:102] recreateIfNeeded on default-k8s-diff-port-396551: state=Stopped err=<nil>
	W0216 18:00:50.048506  367199 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 18:00:50.051077  367199 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-396551" ...
	I0216 18:00:50.052980  367199 cli_runner.go:164] Run: docker start default-k8s-diff-port-396551
	I0216 18:00:50.397578  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:00:50.425398  367199 kic.go:430] container "default-k8s-diff-port-396551" state is running.
	I0216 18:00:50.425786  367199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-396551
	I0216 18:00:50.449584  367199 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/config.json ...
	I0216 18:00:50.449814  367199 machine.go:88] provisioning docker machine ...
	I0216 18:00:50.449835  367199 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-396551"
	I0216 18:00:50.449894  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:50.470783  367199 main.go:141] libmachine: Using SSH client type: native
	I0216 18:00:50.471249  367199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33092 <nil> <nil>}
	I0216 18:00:50.471270  367199 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-396551 && echo "default-k8s-diff-port-396551" | sudo tee /etc/hostname
	I0216 18:00:50.472104  367199 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 18:00:53.624450  367199 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-396551
	
	I0216 18:00:53.624553  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:53.641669  367199 main.go:141] libmachine: Using SSH client type: native
	I0216 18:00:53.642077  367199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33092 <nil> <nil>}
	I0216 18:00:53.642101  367199 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-396551' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-396551/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-396551' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 18:00:53.784962  367199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
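A quick sanity check of the hostname provisioning above (a sketch; run inside the guest container, not on the CI host):
	hostname                                  # expect: default-k8s-diff-port-396551
	grep '^127.0.1.1 default-k8s-diff-port-396551' /etc/hosts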
	I0216 18:00:53.784990  367199 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 18:00:53.785014  367199 ubuntu.go:177] setting up certificates
	I0216 18:00:53.785034  367199 provision.go:83] configureAuth start
	I0216 18:00:53.785104  367199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-396551
	I0216 18:00:53.801514  367199 provision.go:138] copyHostCerts
	I0216 18:00:53.801585  367199 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 18:00:53.801597  367199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 18:00:53.801678  367199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 18:00:53.801775  367199 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 18:00:53.801784  367199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 18:00:53.801810  367199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 18:00:53.801864  367199 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 18:00:53.801872  367199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 18:00:53.801896  367199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 18:00:53.802271  367199 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-396551 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-396551]
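The server-cert generation above (SANs covering the node IP, loopback, and hostnames) has a rough openssl(1) equivalent; file names below are illustrative, the SAN list mirrors the log line:
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.default-k8s-diff-port-396551"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 1095 \
	  -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-diff-port-396551')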
	I0216 18:00:54.040814  367199 provision.go:172] copyRemoteCerts
	I0216 18:00:54.040887  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 18:00:54.040932  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.057665  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:00:54.158845  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 18:00:54.183636  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0216 18:00:54.207689  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 18:00:54.231701  367199 provision.go:86] duration metric: configureAuth took 446.646599ms
	I0216 18:00:54.231726  367199 ubuntu.go:193] setting minikube options for container-runtime
	I0216 18:00:54.231954  367199 config.go:182] Loaded profile config "default-k8s-diff-port-396551": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 18:00:54.232020  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.248124  367199 main.go:141] libmachine: Using SSH client type: native
	I0216 18:00:54.248558  367199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33092 <nil> <nil>}
	I0216 18:00:54.248574  367199 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 18:00:54.389197  367199 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 18:00:54.389221  367199 ubuntu.go:71] root file system type: overlay
	I0216 18:00:54.389360  367199 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 18:00:54.389431  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.405850  367199 main.go:141] libmachine: Using SSH client type: native
	I0216 18:00:54.406271  367199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33092 <nil> <nil>}
	I0216 18:00:54.406355  367199 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 18:00:54.556140  367199 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 18:00:54.556222  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.572704  367199 main.go:141] libmachine: Using SSH client type: native
	I0216 18:00:54.573111  367199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33092 <nil> <nil>}
	I0216 18:00:54.573149  367199 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 18:00:54.718127  367199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 18:00:54.718150  367199 machine.go:91] provisioned docker machine in 4.268319617s
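The update command above follows an idempotent write-diff-swap pattern: only replace the unit and restart the daemon when the rendered file actually changed. As a standalone sketch, with the same paths as the log:
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new" >/dev/null; then
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi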
	I0216 18:00:54.718167  367199 start.go:300] post-start starting for "default-k8s-diff-port-396551" (driver="docker")
	I0216 18:00:54.718179  367199 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 18:00:54.718240  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 18:00:54.718290  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.738316  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:00:54.838009  367199 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 18:00:54.841220  367199 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 18:00:54.841258  367199 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 18:00:54.841270  367199 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 18:00:54.841279  367199 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 18:00:54.841292  367199 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 18:00:54.841356  367199 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 18:00:54.841445  367199 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 18:00:54.841551  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 18:00:54.849897  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 18:00:54.874260  367199 start.go:303] post-start completed in 156.077655ms
	I0216 18:00:54.874403  367199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 18:00:54.874487  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:54.892070  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:00:54.989628  367199 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 18:00:54.994200  367199 fix.go:56] fixHost completed within 4.963009449s
	I0216 18:00:54.994226  367199 start.go:83] releasing machines lock for "default-k8s-diff-port-396551", held for 4.963066755s
	I0216 18:00:54.994309  367199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-396551
	I0216 18:00:55.015261  367199 ssh_runner.go:195] Run: cat /version.json
	I0216 18:00:55.015333  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:55.015596  367199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 18:00:55.015640  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:00:55.038863  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:00:55.038962  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:00:55.275942  367199 ssh_runner.go:195] Run: systemctl --version
	I0216 18:00:55.280225  367199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 18:00:55.284400  367199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 18:00:55.303127  367199 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 18:00:55.303253  367199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 18:00:55.312246  367199 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
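The two find invocations above patch the loopback CNI config in place and disable any bridge/podman configs by renaming them aside; the disable half, as a minimal sketch:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;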
	I0216 18:00:55.312313  367199 start.go:475] detecting cgroup driver to use...
	I0216 18:00:55.312358  367199 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 18:00:55.312467  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 18:00:55.328845  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 18:00:55.338873  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 18:00:55.349008  367199 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 18:00:55.349125  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 18:00:55.359009  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 18:00:55.368924  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 18:00:55.378953  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 18:00:55.388935  367199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 18:00:55.398393  367199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 18:00:55.407972  367199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 18:00:55.416887  367199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 18:00:55.425078  367199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:00:55.517787  367199 ssh_runner.go:195] Run: sudo systemctl restart containerd
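The sed series above rewrites /etc/containerd/config.toml for the cgroupfs driver; the cgroup-related edits consolidate into one pass (a sketch, same file as the log):
	sudo sed -i -r \
	  -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	  -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	  -e '/systemd_cgroup/d' \
	  /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd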
	I0216 18:00:55.628980  367199 start.go:475] detecting cgroup driver to use...
	I0216 18:00:55.629024  367199 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 18:00:55.629075  367199 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 18:00:55.643166  367199 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 18:00:55.643234  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 18:00:55.657354  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 18:00:55.674755  367199 ssh_runner.go:195] Run: which cri-dockerd
	I0216 18:00:55.679050  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 18:00:55.688429  367199 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 18:00:55.708083  367199 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 18:00:55.805146  367199 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 18:00:55.918444  367199 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 18:00:55.918580  367199 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 18:00:55.937773  367199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:00:56.037243  367199 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 18:00:56.529204  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 18:00:56.540805  367199 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 18:00:56.553719  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 18:00:56.565345  367199 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 18:00:56.647603  367199 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 18:00:56.741608  367199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:00:56.830390  367199 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 18:00:56.844390  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 18:00:56.856121  367199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:00:56.955514  367199 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 18:00:57.040272  367199 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 18:00:57.040404  367199 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 18:00:57.046277  367199 start.go:543] Will wait 60s for crictl version
	I0216 18:00:57.046386  367199 ssh_runner.go:195] Run: which crictl
	I0216 18:00:57.050222  367199 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 18:00:57.102465  367199 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
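	The same CRI endpoint can be queried directly with crictl once /etc/crictl.yaml points at cri-dockerd (a sketch):
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version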
	I0216 18:00:57.102602  367199 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 18:00:57.125957  367199 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 18:00:57.153822  367199 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0216 18:00:57.153930  367199 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-396551 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 18:00:57.170351  367199 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 18:00:57.174090  367199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 18:00:57.185397  367199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 18:00:57.185470  367199 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 18:00:57.203725  367199 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 18:00:57.203752  367199 docker.go:615] Images already preloaded, skipping extraction
	I0216 18:00:57.203818  367199 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 18:00:57.221581  367199 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0216 18:00:57.221602  367199 cache_images.go:84] Images are preloaded, skipping loading
	I0216 18:00:57.221665  367199 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 18:00:57.276767  367199 cni.go:84] Creating CNI manager for ""
	I0216 18:00:57.276796  367199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:00:57.276813  367199 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0216 18:00:57.276833  367199 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-396551 NodeName:default-k8s-diff-port-396551 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 18:00:57.276975  367199 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-396551"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
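	Assuming kubeadm v1.26+ (which ships `kubeadm config validate`), a config like the one rendered above can be sanity-checked before use; the path matches the kubeadm.yaml written later in this log:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml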
	
	I0216 18:00:57.277050  367199 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-396551 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-396551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0216 18:00:57.277120  367199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0216 18:00:57.286379  367199 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 18:00:57.286451  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 18:00:57.295183  367199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0216 18:00:57.313508  367199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0216 18:00:57.331338  367199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0216 18:00:57.349911  367199 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 18:00:57.353317  367199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 18:00:57.364043  367199 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551 for IP: 192.168.76.2
	I0216 18:00:57.364074  367199 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:00:57.364211  367199 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 18:00:57.364257  367199 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 18:00:57.364335  367199 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.key
	I0216 18:00:57.364396  367199 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/apiserver.key.31bdca25
	I0216 18:00:57.364435  367199 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/proxy-client.key
	I0216 18:00:57.364546  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 18:00:57.364573  367199 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 18:00:57.364583  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 18:00:57.364608  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 18:00:57.364631  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 18:00:57.364715  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 18:00:57.364762  367199 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 18:00:57.365379  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 18:00:57.390277  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 18:00:57.414895  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 18:00:57.439869  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 18:00:57.464431  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 18:00:57.489128  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 18:00:57.513154  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 18:00:57.538208  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 18:00:57.562436  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 18:00:57.587301  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 18:00:57.612290  367199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 18:00:57.637479  367199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 18:00:57.655717  367199 ssh_runner.go:195] Run: openssl version
	I0216 18:00:57.661373  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 18:00:57.671473  367199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:00:57.675194  367199 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:00:57.675259  367199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:00:57.682482  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 18:00:57.691599  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 18:00:57.701268  367199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 18:00:57.704696  367199 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 18:00:57.704768  367199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 18:00:57.711890  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
	I0216 18:00:57.720928  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 18:00:57.730941  367199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 18:00:57.734517  367199 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 18:00:57.734604  367199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 18:00:57.741968  367199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
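The test -L / ln -fs pairs above follow OpenSSL's subject-hash lookup convention: each trusted PEM needs a <hash>.0 symlink so the library can find it by hash. As a sketch for one cert:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"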
	I0216 18:00:57.751190  367199 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 18:00:57.754651  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 18:00:57.761437  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 18:00:57.768392  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 18:00:57.775530  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 18:00:57.782886  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 18:00:57.790085  367199 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
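`-checkend 86400` makes openssl exit non-zero if a certificate expires within 24 hours; the checks above can be looped (a sketch over a few of the same client certs):
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
	    || echo "$c.crt expires within 24h"
	done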
	I0216 18:00:57.797055  367199 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-396551 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-396551 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:00:57.797296  367199 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 18:00:57.814859  367199 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 18:00:57.824242  367199 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 18:00:57.824267  367199 kubeadm.go:636] restartCluster start
	I0216 18:00:57.824322  367199 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 18:00:57.832934  367199 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:00:57.833498  367199 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-396551" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:00:57.833707  367199 kubeconfig.go:146] "default-k8s-diff-port-396551" context is missing from /home/jenkins/minikube-integration/17936-2208/kubeconfig - will repair!
	I0216 18:00:57.834134  367199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:00:57.835628  367199 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 18:00:57.844552  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:00:57.844664  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:00:57.854805  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:00:58.345482  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:00:58.345585  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:00:58.355931  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:00:58.845633  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:00:58.845731  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:00:58.857337  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:00:59.344694  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:00:59.344786  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:00:59.355123  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:00:59.844987  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:00:59.845070  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:00:59.855430  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:00.345043  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:00.345203  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:00.356998  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:00.844579  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:00.844705  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:00.854983  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:01.345650  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:01.345754  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:01.356101  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:01.844613  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:01.844737  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:01.856204  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:02.344795  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:02.344881  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:02.356041  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:02.844681  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:02.844787  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:02.855166  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:03.344686  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:03.344801  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:03.354864  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:03.845489  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:03.845588  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:03.856090  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:04.344703  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:04.344810  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:04.355146  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:04.844913  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:04.844995  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:04.855570  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:05.345229  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:05.345311  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:05.355616  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:05.844725  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:05.844846  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:05.855238  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:06.344695  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:06.344816  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:06.355347  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:06.844831  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:06.844929  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:06.855309  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:07.344949  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:07.345054  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:07.355623  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:07.845428  367199 api_server.go:166] Checking apiserver status ...
	I0216 18:01:07.845529  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:01:07.855893  367199 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:07.855918  367199 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
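
[editor's note] The stanza above is minikube polling for the kube-apiserver process roughly every 500ms until its deadline expires; only then does it conclude the cluster needs reconfiguring. A minimal Go sketch of that retry pattern follows — illustrative only; the function name and timeout are assumptions, not minikube's actual code:

    // Illustrative sketch (not minikube's implementation): poll for the
    // kube-apiserver process via pgrep until the context deadline expires,
    // mirroring the retry loop logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		// Same command as in the log: sudo pgrep -xnf kube-apiserver.*minikube.*
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return string(out), nil // pgrep exits 0 and prints the PID once found
    		}
    		select {
    		case <-ctx.Done():
    			// Matches the log's conclusion: "apiserver error: context deadline exceeded"
    			return "", fmt.Errorf("unable to get apiserver pid: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // timeout is an assumption
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	fmt.Println(pid, err)
    }
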
	I0216 18:01:07.855938  367199 kubeadm.go:1135] stopping kube-system containers ...
	I0216 18:01:07.855996  367199 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 18:01:07.875467  367199 docker.go:483] Stopping containers: [271da381172e 6fb164316b2b 78c6568b03bf 2e7c62f60413 6f2f49dac5e9 609cf0695a21 40936fb1f67b 8070f592e215 cecd91a24a9f 52c8d8801a3f d4211ff78120 d0b4e7d2cfb6 17990b971519 642790b91358 af66ce115def 7cd4674bcf65 4b691a59358d]
	I0216 18:01:07.875542  367199 ssh_runner.go:195] Run: docker stop 271da381172e 6fb164316b2b 78c6568b03bf 2e7c62f60413 6f2f49dac5e9 609cf0695a21 40936fb1f67b 8070f592e215 cecd91a24a9f 52c8d8801a3f d4211ff78120 d0b4e7d2cfb6 17990b971519 642790b91358 af66ce115def 7cd4674bcf65 4b691a59358d
	I0216 18:01:07.896719  367199 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 18:01:07.910322  367199 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 18:01:07.919864  367199 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 16 17:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 16 17:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 16 17:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 16 17:59 /etc/kubernetes/scheduler.conf
	
	I0216 18:01:07.919938  367199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0216 18:01:07.929236  367199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0216 18:01:07.938796  367199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0216 18:01:07.947710  367199 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:07.947777  367199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 18:01:07.956625  367199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0216 18:01:07.966130  367199 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:01:07.966210  367199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
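
[editor's note] The grep/rm sequence above checks each kubeconfig for the expected control-plane endpoint (https://control-plane.minikube.internal:8444 in this run) and deletes any file that does not reference it, so the subsequent `kubeadm init phase kubeconfig` regenerates them. A hedged Go sketch of the same check — the helper is hypothetical; the paths and endpoint come from the log:

    // Illustrative stale-kubeconfig cleanup: keep files that mention the
    // expected endpoint, remove the rest so kubeadm rewrites them.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f) // requires root for these paths
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			if err := os.Remove(f); err != nil {
    				fmt.Println("remove:", err)
    			}
    		}
    	}
    }
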
	I0216 18:01:07.974617  367199 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 18:01:07.983986  367199 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 18:01:07.984010  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:08.043191  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:10.642225  367199 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.598949944s)
	I0216 18:01:10.642255  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:10.798791  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:10.864018  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:10.935419  367199 api_server.go:52] waiting for apiserver process to appear ...
	I0216 18:01:10.935490  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:01:11.435627  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:01:11.935900  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:01:11.972171  367199 api_server.go:72] duration metric: took 1.036752021s to wait for apiserver process to appear ...
	I0216 18:01:11.972197  367199 api_server.go:88] waiting for apiserver healthz status ...
	I0216 18:01:11.972216  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:11.972478  367199 api_server.go:269] stopped: https://192.168.76.2:8444/healthz: Get "https://192.168.76.2:8444/healthz": dial tcp 192.168.76.2:8444: connect: connection refused
	I0216 18:01:12.473243  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:16.568547  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 18:01:16.568577  367199 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 18:01:16.568590  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:16.662829  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 18:01:16.662920  367199 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
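
[editor's note] The 500 body above is the apiserver's verbose healthz breakdown: each [+]/[-] line is one named check, and the endpoint keeps returning 500 until every post-start hook reports ok. A throwaway Go poller in the same spirit — illustrative, not minikube's implementation; TLS verification is skipped only because this is an unauthenticated probe against the cluster's self-signed certificate, and ?verbose forces the per-check listing even on success:

    // Illustrative healthz probe: fetch the verbose check list and report
    // the checks that are still failing.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // throwaway probe only
    	}}
    	resp, err := client.Get("https://192.168.76.2:8444/healthz?verbose")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	for _, line := range strings.Split(string(body), "\n") {
    		if strings.HasPrefix(line, "[-]") {
    			fmt.Println("still failing:", line) // e.g. [-]poststarthook/rbac/bootstrap-roles
    		}
    	}
    	fmt.Println("status:", resp.StatusCode) // 200 once every check passes
    }
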
	I0216 18:01:16.973099  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:16.982997  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 18:01:16.983077  367199 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 18:01:17.472288  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:17.482342  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 18:01:17.482424  367199 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 18:01:17.972758  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:01:17.986763  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0216 18:01:18.003338  367199 api_server.go:141] control plane version: v1.28.4
	I0216 18:01:18.003445  367199 api_server.go:131] duration metric: took 6.031240239s to wait for apiserver health ...
	I0216 18:01:18.003472  367199 cni.go:84] Creating CNI manager for ""
	I0216 18:01:18.003515  367199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:01:18.006672  367199 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 18:01:18.009206  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 18:01:18.049340  367199 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
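
[editor's note] The log records that a 457-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist but not its contents, so the bridge config below is only a representative example of what such a file can look like — an assumption, not minikube's exact bytes:

    // Representative bridge CNI conflist; every value below is illustrative.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// The kubelet picks up the lexically first conflist in /etc/cni/net.d.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
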
	I0216 18:01:18.112472  367199 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 18:01:18.123862  367199 system_pods.go:59] 9 kube-system pods found
	I0216 18:01:18.124067  367199 system_pods.go:61] "coredns-5dd5756b68-jjsh8" [ac1fd4e1-db2e-461c-8a1e-5fe5ca3d4442] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 18:01:18.124093  367199 system_pods.go:61] "coredns-5dd5756b68-st2xd" [945f970e-8729-47ad-9dd2-40c422a9cdf7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 18:01:18.124144  367199 system_pods.go:61] "etcd-default-k8s-diff-port-396551" [d07defad-57a9-467c-b5f3-49fd1b276d9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 18:01:18.124179  367199 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-396551" [def5fa71-170f-42fe-aeb1-79b26fc2bd52] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 18:01:18.124206  367199 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-396551" [890f8006-efee-4173-b278-b9d4a417f554] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 18:01:18.124231  367199 system_pods.go:61] "kube-proxy-tcz5n" [04eb3ae2-e17c-46a8-a753-0d61addf432a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 18:01:18.124261  367199 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-396551" [090991d6-a75b-454b-9ece-20c5556b4053] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 18:01:18.124291  367199 system_pods.go:61] "metrics-server-57f55c9bc5-8f5gx" [0894e53d-c8c3-46bb-b4c3-13b2f955a9e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 18:01:18.124317  367199 system_pods.go:61] "storage-provisioner" [95a2628f-67ea-465a-90b1-b636cfb95b8e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 18:01:18.124347  367199 system_pods.go:74] duration metric: took 11.811432ms to wait for pod list to return data ...
	I0216 18:01:18.124386  367199 node_conditions.go:102] verifying NodePressure condition ...
	I0216 18:01:18.137132  367199 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 18:01:18.137241  367199 node_conditions.go:123] node cpu capacity is 2
	I0216 18:01:18.137269  367199 node_conditions.go:105] duration metric: took 12.865243ms to run NodePressure ...
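
[editor's note] The NodePressure step above reads node capacity (203034800Ki of ephemeral storage and 2 CPUs on this runner). A hedged client-go sketch that fetches the same two fields — the kubeconfig path is a placeholder:

    // Illustrative client-go sketch: read node CPU and ephemeral-storage
    // capacity, the two values logged above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
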
	I0216 18:01:18.137313  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:01:18.491314  367199 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0216 18:01:18.497167  367199 kubeadm.go:787] kubelet initialised
	I0216 18:01:18.497234  367199 kubeadm.go:788] duration metric: took 5.855734ms waiting for restarted kubelet to initialise ...
	I0216 18:01:18.497259  367199 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 18:01:18.505223  367199 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:20.512525  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:23.012849  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:25.512074  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:27.512146  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:29.512433  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:31.512525  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:34.013382  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:36.512149  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:39.012152  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:41.512010  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:43.512327  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:45.512530  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:47.513952  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:50.013700  367199 pod_ready.go:102] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:51.512043  367199 pod_ready.go:92] pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:51.512068  367199 pod_ready.go:81] duration metric: took 33.006661162s waiting for pod "coredns-5dd5756b68-jjsh8" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:51.512079  367199 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-st2xd" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:53.518403  367199 pod_ready.go:102] pod "coredns-5dd5756b68-st2xd" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:55.518978  367199 pod_ready.go:102] pod "coredns-5dd5756b68-st2xd" in "kube-system" namespace has status "Ready":"False"
	I0216 18:01:56.018892  367199 pod_ready.go:92] pod "coredns-5dd5756b68-st2xd" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.018926  367199 pod_ready.go:81] duration metric: took 4.50683453s waiting for pod "coredns-5dd5756b68-st2xd" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.018938  367199 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.025077  367199 pod_ready.go:92] pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.025145  367199 pod_ready.go:81] duration metric: took 6.198367ms waiting for pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.025171  367199 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.037092  367199 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.037119  367199 pod_ready.go:81] duration metric: took 11.928783ms waiting for pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.037132  367199 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.042762  367199 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.042787  367199 pod_ready.go:81] duration metric: took 5.647042ms waiting for pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.042800  367199 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tcz5n" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.048620  367199 pod_ready.go:92] pod "kube-proxy-tcz5n" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.048654  367199 pod_ready.go:81] duration metric: took 5.846314ms waiting for pod "kube-proxy-tcz5n" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.048665  367199 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:56.416688  367199 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:01:56.416711  367199 pod_ready.go:81] duration metric: took 368.037136ms waiting for pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
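
[editor's note] Each pod_ready line in this section boils down to inspecting the pod's PodReady condition; a pod counts as "Ready" only when that condition is True. A self-contained client-go sketch of that test — illustrative, with a placeholder kubeconfig path:

    // Illustrative readiness check behind the pod_ready log lines above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // no PodReady condition reported yet
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ok, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-8f5gx")
    	fmt.Println(ok, err)
    }
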
	I0216 18:01:56.416742  367199 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace to be "Ready" ...
	I0216 18:01:58.423104  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:00.424591  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:02.924022  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:05.422806  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:07.423464  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:09.424459  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:11.924257  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:14.423701  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:16.923248  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:19.426273  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:21.923741  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:24.423448  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:26.423508  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:28.923676  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:31.422363  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:33.423168  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:35.423532  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:37.923182  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:39.923772  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:42.422947  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:44.922780  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:46.923529  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:49.423614  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:50.221806  345500 kubeadm.go:322] 
	I0216 18:02:50.221879  345500 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 18:02:50.221922  345500 kubeadm.go:322] 	timed out waiting for the condition
	I0216 18:02:50.221933  345500 kubeadm.go:322] 
	I0216 18:02:50.221965  345500 kubeadm.go:322] This error is likely caused by:
	I0216 18:02:50.221998  345500 kubeadm.go:322] 	- The kubelet is not running
	I0216 18:02:50.222100  345500 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 18:02:50.222110  345500 kubeadm.go:322] 
	I0216 18:02:50.222207  345500 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 18:02:50.222241  345500 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 18:02:50.222274  345500 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 18:02:50.222283  345500 kubeadm.go:322] 
	I0216 18:02:50.222386  345500 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 18:02:50.222479  345500 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0216 18:02:50.222568  345500 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 18:02:50.222615  345500 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 18:02:50.222689  345500 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 18:02:50.222722  345500 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 18:02:50.234313  345500 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 18:02:50.234499  345500 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 18:02:50.234715  345500 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 18:02:50.234831  345500 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 18:02:50.234923  345500 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 18:02:50.235003  345500 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0216 18:02:50.235172  345500 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0216 18:02:50.235224  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0216 18:02:51.068384  345500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:02:51.081838  345500 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 18:02:51.081933  345500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 18:02:51.091862  345500 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 18:02:51.091911  345500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 18:02:51.158051  345500 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0216 18:02:51.158368  345500 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 18:02:51.368027  345500 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 18:02:51.368102  345500 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 18:02:51.368152  345500 kubeadm.go:322] DOCKER_VERSION: 25.0.3
	I0216 18:02:51.368188  345500 kubeadm.go:322] OS: Linux
	I0216 18:02:51.368234  345500 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 18:02:51.368282  345500 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 18:02:51.368333  345500 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 18:02:51.368381  345500 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 18:02:51.368435  345500 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 18:02:51.368482  345500 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 18:02:51.467404  345500 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 18:02:51.467512  345500 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 18:02:51.467604  345500 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 18:02:51.647725  345500 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 18:02:51.649456  345500 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 18:02:51.658891  345500 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0216 18:02:51.764984  345500 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 18:02:51.770068  345500 out.go:204]   - Generating certificates and keys ...
	I0216 18:02:51.770161  345500 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 18:02:51.770229  345500 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 18:02:51.770322  345500 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 18:02:51.770395  345500 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 18:02:51.770481  345500 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 18:02:51.770544  345500 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 18:02:51.770616  345500 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 18:02:51.770687  345500 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 18:02:51.770777  345500 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 18:02:51.770861  345500 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 18:02:51.770911  345500 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 18:02:51.770974  345500 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 18:02:52.229219  345500 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 18:02:52.917988  345500 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 18:02:53.854362  345500 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 18:02:54.800326  345500 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 18:02:54.801345  345500 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 18:02:51.424783  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:53.427550  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:54.803523  345500 out.go:204]   - Booting up control plane ...
	I0216 18:02:54.803618  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 18:02:54.817004  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 18:02:54.822134  345500 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 18:02:54.832027  345500 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 18:02:54.832191  345500 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 18:02:55.922777  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:02:57.923283  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:00.424154  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:02.922821  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:04.923248  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:06.923753  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:09.422483  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:11.423061  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:13.423240  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:15.923119  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:17.923789  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:20.423097  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:22.423459  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:24.924050  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:27.423422  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:29.923552  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:32.423482  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:34.832041  345500 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0216 18:03:34.923584  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:37.422713  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:39.423440  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:41.426355  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:43.923283  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:45.923491  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:48.423900  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:50.924204  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:53.422883  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:55.423269  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:57.424147  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:03:59.922729  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:01.923600  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:03.924322  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:06.424136  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:08.426501  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:10.923878  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:13.423487  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:15.423912  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:17.924152  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:20.423803  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:22.424381  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:24.923317  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:26.924234  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:29.423387  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:31.424428  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:33.923750  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:36.423071  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:38.423759  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:40.923649  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:43.423418  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:45.424425  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:47.923796  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:50.423066  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:52.423888  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:54.923341  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:56.923728  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:04:59.423463  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:01.923886  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:03.925244  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:06.424191  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:08.923954  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:11.424245  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:13.923704  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:16.424004  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:18.922992  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:20.923601  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:23.423451  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:25.923441  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:27.923505  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:29.923641  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:31.923826  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:34.423723  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:36.922883  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:38.925959  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:41.423508  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:43.423707  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:45.923150  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:47.924100  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:50.422828  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:52.922782  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:54.922872  367199 pod_ready.go:102] pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace has status "Ready":"False"
	I0216 18:05:56.417418  367199 pod_ready.go:81] duration metric: took 4m0.000650905s waiting for pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace to be "Ready" ...
	E0216 18:05:56.417460  367199 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-8f5gx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0216 18:05:56.417478  367199 pod_ready.go:38] duration metric: took 4m37.920191122s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 18:05:56.417505  367199 kubeadm.go:640] restartCluster took 4m58.593232073s
	W0216 18:05:56.417567  367199 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
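The loop above burned the full 4m0s budget polling metrics-server before minikube gave up on restartCluster and fell back to kubeadm reset below. A hand-rolled equivalent of the same readiness check, assuming the kubectl context carries the profile name and the addon pods carry the k8s-app=metrics-server label (minikube conventions, not confirmed by this log):

	# Block until the pod reports Ready, with the same 4m budget:
	kubectl --context default-k8s-diff-port-396551 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
	# One-shot probe of the condition the loop prints as "Ready":"False":
	kubectl -n kube-system get pod metrics-server-57f55c9bc5-8f5gx \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'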
	I0216 18:05:56.417598  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0216 18:06:04.613297  367199 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (8.195674394s)
	I0216 18:06:04.613372  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:06:04.626301  367199 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 18:06:04.635366  367199 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0216 18:06:04.635447  367199 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 18:06:04.644410  367199 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0216 18:06:04.644454  367199 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0216 18:06:04.688101  367199 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0216 18:06:04.688161  367199 kubeadm.go:322] [preflight] Running pre-flight checks
	I0216 18:06:04.741831  367199 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0216 18:06:04.741904  367199 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0216 18:06:04.741945  367199 kubeadm.go:322] OS: Linux
	I0216 18:06:04.741993  367199 kubeadm.go:322] CGROUPS_CPU: enabled
	I0216 18:06:04.742042  367199 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0216 18:06:04.742090  367199 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0216 18:06:04.742139  367199 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0216 18:06:04.742188  367199 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0216 18:06:04.742237  367199 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0216 18:06:04.742287  367199 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0216 18:06:04.742336  367199 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0216 18:06:04.742383  367199 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0216 18:06:04.818123  367199 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0216 18:06:04.818231  367199 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0216 18:06:04.818333  367199 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0216 18:06:05.146489  367199 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0216 18:06:05.149588  367199 out.go:204]   - Generating certificates and keys ...
	I0216 18:06:05.149690  367199 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0216 18:06:05.149784  367199 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0216 18:06:05.149873  367199 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0216 18:06:05.149939  367199 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0216 18:06:05.150057  367199 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0216 18:06:05.150678  367199 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0216 18:06:05.151354  367199 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0216 18:06:05.151991  367199 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0216 18:06:05.152680  367199 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0216 18:06:05.153319  367199 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0216 18:06:05.153740  367199 kubeadm.go:322] [certs] Using the existing "sa" key
	I0216 18:06:05.154072  367199 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0216 18:06:05.718789  367199 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0216 18:06:06.123065  367199 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0216 18:06:06.591182  367199 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0216 18:06:06.888191  367199 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0216 18:06:06.889203  367199 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0216 18:06:06.892316  367199 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0216 18:06:06.894524  367199 out.go:204]   - Booting up control plane ...
	I0216 18:06:06.894623  367199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0216 18:06:06.894697  367199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0216 18:06:06.895500  367199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0216 18:06:06.910566  367199 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0216 18:06:06.911363  367199 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0216 18:06:06.911603  367199 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0216 18:06:07.015345  367199 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0216 18:06:15.024952  367199 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.011215 seconds
	I0216 18:06:15.025075  367199 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0216 18:06:15.046612  367199 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0216 18:06:15.575535  367199 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0216 18:06:15.576062  367199 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-396551 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0216 18:06:16.088415  367199 kubeadm.go:322] [bootstrap-token] Using token: fdaiye.13oii80yyuzbqbl8
	I0216 18:06:16.090366  367199 out.go:204]   - Configuring RBAC rules ...
	I0216 18:06:16.090491  367199 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0216 18:06:16.096053  367199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0216 18:06:16.106311  367199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0216 18:06:16.110203  367199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0216 18:06:16.114370  367199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0216 18:06:16.119476  367199 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0216 18:06:16.132805  367199 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0216 18:06:16.359853  367199 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0216 18:06:16.501250  367199 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0216 18:06:16.507548  367199 kubeadm.go:322] 
	I0216 18:06:16.507622  367199 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0216 18:06:16.507628  367199 kubeadm.go:322] 
	I0216 18:06:16.507700  367199 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0216 18:06:16.507705  367199 kubeadm.go:322] 
	I0216 18:06:16.507729  367199 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0216 18:06:16.508182  367199 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0216 18:06:16.508238  367199 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0216 18:06:16.508244  367199 kubeadm.go:322] 
	I0216 18:06:16.508294  367199 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0216 18:06:16.508299  367199 kubeadm.go:322] 
	I0216 18:06:16.508343  367199 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0216 18:06:16.508348  367199 kubeadm.go:322] 
	I0216 18:06:16.508396  367199 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0216 18:06:16.508466  367199 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0216 18:06:16.508530  367199 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0216 18:06:16.508534  367199 kubeadm.go:322] 
	I0216 18:06:16.508973  367199 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0216 18:06:16.509114  367199 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0216 18:06:16.509154  367199 kubeadm.go:322] 
	I0216 18:06:16.509468  367199 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token fdaiye.13oii80yyuzbqbl8 \
	I0216 18:06:16.509570  367199 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:985c0c270eb19ee200225b2f669d5c43e8649dded41ae1ed84720452ba5310cd \
	I0216 18:06:16.509788  367199 kubeadm.go:322] 	--control-plane 
	I0216 18:06:16.509799  367199 kubeadm.go:322] 
	I0216 18:06:16.510066  367199 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0216 18:06:16.510077  367199 kubeadm.go:322] 
	I0216 18:06:16.510380  367199 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token fdaiye.13oii80yyuzbqbl8 \
	I0216 18:06:16.510716  367199 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:985c0c270eb19ee200225b2f669d5c43e8649dded41ae1ed84720452ba5310cd 
	I0216 18:06:16.519631  367199 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 18:06:16.519742  367199 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
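Neither warning is fatal to this run: minikube manages the kubelet itself, and SystemVerification is already on the --ignore-preflight-errors list in the Start: line above. On a host where they did matter, a minimal sketch of addressing them (systemd assumed):

	sudo systemctl enable kubelet.service   # clears [WARNING Service-Kubelet]
	sudo modprobe configs                   # fails here too: the AWS kernel ships no 'configs' module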
	I0216 18:06:16.519758  367199 cni.go:84] Creating CNI manager for ""
	I0216 18:06:16.519773  367199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:06:16.522441  367199 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 18:06:16.524454  367199 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 18:06:16.536402  367199 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
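The 457-byte conflist is written from an in-memory asset, so its exact contents never appear in this log. A representative bridge conflist of the same shape (field values are illustrative assumptions, not the bytes minikube wrote):

	sudo tee /etc/cni/net.d/1-k8s.conflist <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF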
	I0216 18:06:16.562644  367199 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 18:06:16.562800  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:16.562884  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=fdce3bf7146356e37c4eabb07ae105993e4520f9 minikube.k8s.io/name=default-k8s-diff-port-396551 minikube.k8s.io/updated_at=2024_02_16T18_06_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:16.865463  367199 ops.go:34] apiserver oom_adj: -16
	I0216 18:06:16.865558  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:17.365648  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:17.866201  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:18.365991  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:18.865721  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:19.366622  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:19.866150  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:20.365683  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:20.866467  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:21.365644  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:21.865671  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:22.365989  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:22.866418  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:23.366382  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:23.866470  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:24.366596  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:24.866167  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:25.366531  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:25.866256  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:26.366149  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:26.866261  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:27.365650  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:27.865631  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:28.365858  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:28.866219  367199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0216 18:06:28.970970  367199 kubeadm.go:1088] duration metric: took 12.408235831s to wait for elevateKubeSystemPrivileges.
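The burst of identical get sa lines above is a plain poll loop: minikube retries until the default service account exists, which is how it knows the controller manager is serving. A hand-rolled equivalent under the same paths (a sketch, not minikube's code):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the timestamps above show roughly two attempts per second
	done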
	I0216 18:06:28.971000  367199 kubeadm.go:406] StartCluster complete in 5m31.173955774s
	I0216 18:06:28.971016  367199 settings.go:142] acquiring lock: {Name:mkb7d1073df18b92aae32c7933eb8e8868b57c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:06:28.971080  367199 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:06:28.971943  367199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:06:28.973274  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 18:06:28.973524  367199 config.go:182] Loaded profile config "default-k8s-diff-port-396551": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 18:06:28.973561  367199 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 18:06:28.973625  367199 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-396551"
	I0216 18:06:28.973645  367199 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-396551"
	W0216 18:06:28.973652  367199 addons.go:243] addon storage-provisioner should already be in state true
	I0216 18:06:28.973700  367199 host.go:66] Checking if "default-k8s-diff-port-396551" exists ...
	I0216 18:06:28.974105  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:06:28.974695  367199 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-396551"
	I0216 18:06:28.974719  367199 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-396551"
	W0216 18:06:28.974726  367199 addons.go:243] addon dashboard should already be in state true
	I0216 18:06:28.974756  367199 host.go:66] Checking if "default-k8s-diff-port-396551" exists ...
	I0216 18:06:28.975134  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:06:28.975916  367199 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-396551"
	I0216 18:06:28.975942  367199 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-396551"
	I0216 18:06:28.976000  367199 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-396551"
	I0216 18:06:28.976012  367199 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-396551"
	W0216 18:06:28.976018  367199 addons.go:243] addon metrics-server should already be in state true
	I0216 18:06:28.976047  367199 host.go:66] Checking if "default-k8s-diff-port-396551" exists ...
	I0216 18:06:28.976441  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:06:28.976865  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:06:29.026723  367199 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 18:06:29.029029  367199 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 18:06:29.034384  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 18:06:29.034407  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 18:06:29.034476  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:06:29.049084  367199 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0216 18:06:29.053558  367199 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 18:06:29.053580  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 18:06:29.053649  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:06:29.064483  367199 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 18:06:29.066496  367199 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 18:06:29.066518  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 18:06:29.066587  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:06:29.085288  367199 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-396551"
	W0216 18:06:29.085313  367199 addons.go:243] addon default-storageclass should already be in state true
	I0216 18:06:29.085337  367199 host.go:66] Checking if "default-k8s-diff-port-396551" exists ...
	I0216 18:06:29.085771  367199 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-396551 --format={{.State.Status}}
	I0216 18:06:29.146210  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:06:29.160852  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:06:29.161756  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:06:29.181980  367199 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 18:06:29.181999  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 18:06:29.182065  367199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-396551
	I0216 18:06:29.205617  367199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/default-k8s-diff-port-396551/id_rsa Username:docker}
	I0216 18:06:29.470035  367199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 18:06:29.477051  367199 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-396551" context rescaled to 1 replicas
	I0216 18:06:29.477137  367199 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 18:06:29.479683  367199 out.go:177] * Verifying Kubernetes components...
	I0216 18:06:29.481573  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:06:29.538370  367199 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0216 18:06:29.620914  367199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 18:06:29.652539  367199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 18:06:29.652606  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 18:06:29.655137  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 18:06:29.655207  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 18:06:29.863098  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 18:06:29.863168  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 18:06:29.876922  367199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 18:06:29.876991  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 18:06:30.030841  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 18:06:30.030917  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 18:06:30.085460  367199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 18:06:30.085493  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 18:06:30.360419  367199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 18:06:30.361133  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 18:06:30.361188  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 18:06:30.457370  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 18:06:30.457443  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 18:06:30.488194  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 18:06:30.488263  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 18:06:30.567978  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 18:06:30.568048  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 18:06:30.656786  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 18:06:30.656856  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 18:06:30.696572  367199 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 18:06:30.696647  367199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 18:06:30.717815  367199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
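Each scp memory --> line stages one manifest from minikube's embedded assets; the single kubectl apply then creates them all in one shot. Once that apply completes (see the Completed line further down), the result can be spot-checked by hand (namespace name assumed from the stock dashboard-ns.yaml asset):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.4/kubectl -n kubernetes-dashboard get deploy,svc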
	I0216 18:06:31.305920  367199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.835803685s)
	I0216 18:06:31.306089  367199 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.824455137s)
	I0216 18:06:31.306163  367199 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-396551" to be "Ready" ...
	I0216 18:06:31.313252  367199 node_ready.go:49] node "default-k8s-diff-port-396551" has status "Ready":"True"
	I0216 18:06:31.313326  367199 node_ready.go:38] duration metric: took 7.133233ms waiting for node "default-k8s-diff-port-396551" to be "Ready" ...
	I0216 18:06:31.313352  367199 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 18:06:31.321613  367199 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9kzs2" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.328044  367199 pod_ready.go:92] pod "coredns-5dd5756b68-9kzs2" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:31.328113  367199 pod_ready.go:81] duration metric: took 6.42418ms waiting for pod "coredns-5dd5756b68-9kzs2" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.328139  367199 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j9vdm" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.334520  367199 pod_ready.go:92] pod "coredns-5dd5756b68-j9vdm" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:31.334587  367199 pod_ready.go:81] duration metric: took 6.418559ms waiting for pod "coredns-5dd5756b68-j9vdm" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.334612  367199 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.340373  367199 pod_ready.go:92] pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:31.340441  367199 pod_ready.go:81] duration metric: took 5.805984ms waiting for pod "etcd-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.340481  367199 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.347014  367199 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:31.347084  367199 pod_ready.go:81] duration metric: took 6.578684ms waiting for pod "kube-apiserver-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.347112  367199 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.521021  367199 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.982617001s)
	I0216 18:06:31.521111  367199 start.go:929] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
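The sed pipeline that just completed edits the Corefile in transit rather than on disk. Reconstructed from the two sed expressions (not dumped from the cluster), the replaced Corefile gains a log directive plus this hosts block ahead of the existing forward stanza, and the injected record can be confirmed directly:

	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'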
	I0216 18:06:31.734274  367199 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:31.734307  367199 pod_ready.go:81] duration metric: took 387.157027ms waiting for pod "kube-controller-manager-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.734320  367199 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nrvwn" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:31.958583  367199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.337589796s)
	I0216 18:06:31.958703  367199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.598210702s)
	I0216 18:06:31.958777  367199 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-396551"
	I0216 18:06:32.110107  367199 pod_ready.go:92] pod "kube-proxy-nrvwn" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:32.110188  367199 pod_ready.go:81] duration metric: took 375.856421ms waiting for pod "kube-proxy-nrvwn" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:32.110217  367199 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:32.511899  367199 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace has status "Ready":"True"
	I0216 18:06:32.511975  367199 pod_ready.go:81] duration metric: took 401.737833ms waiting for pod "kube-scheduler-default-k8s-diff-port-396551" in "kube-system" namespace to be "Ready" ...
	I0216 18:06:32.512008  367199 pod_ready.go:38] duration metric: took 1.198630573s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0216 18:06:32.512038  367199 api_server.go:52] waiting for apiserver process to appear ...
	I0216 18:06:32.512132  367199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:06:32.852284  367199 api_server.go:72] duration metric: took 3.375101941s to wait for apiserver process to appear ...
	I0216 18:06:32.852356  367199 api_server.go:88] waiting for apiserver healthz status ...
	I0216 18:06:32.852390  367199 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0216 18:06:32.853135  367199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.135235938s)
	I0216 18:06:32.856720  367199 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features, please run:
	
		minikube -p default-k8s-diff-port-396551 addons enable metrics-server
	
	I0216 18:06:32.859064  367199 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0216 18:06:32.863056  367199 addons.go:505] enable addons completed in 3.889487649s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0216 18:06:32.863964  367199 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0216 18:06:32.865439  367199 api_server.go:141] control plane version: v1.28.4
	I0216 18:06:32.865466  367199 api_server.go:131] duration metric: took 13.090775ms to wait for apiserver health ...
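The healthz probe above is an ordinary HTTPS GET; the same check by hand (-k because the apiserver presents minikube's self-signed certificate; host and port are from the node config above):

	curl -sk https://192.168.76.2:8444/healthz
	# -> ok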
	I0216 18:06:32.865476  367199 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 18:06:32.872121  367199 system_pods.go:59] 9 kube-system pods found
	I0216 18:06:32.872159  367199 system_pods.go:61] "coredns-5dd5756b68-9kzs2" [b8b16a57-501c-4488-ba87-dce6ff907658] Running
	I0216 18:06:32.872168  367199 system_pods.go:61] "coredns-5dd5756b68-j9vdm" [fde6120e-da8e-4edb-a5f7-2886d7e92b82] Running
	I0216 18:06:32.872174  367199 system_pods.go:61] "etcd-default-k8s-diff-port-396551" [f6684cf9-26fa-4ab3-b7ca-b0caa7a05c01] Running
	I0216 18:06:32.872182  367199 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-396551" [24d9d52c-9594-46c3-9031-7ef0f1284c69] Running
	I0216 18:06:32.872189  367199 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-396551" [3ba54399-1d8b-46a3-8fec-6847bfe4e6fd] Running
	I0216 18:06:32.872195  367199 system_pods.go:61] "kube-proxy-nrvwn" [acde4c5b-3cea-415d-9767-eb7259718b4f] Running
	I0216 18:06:32.872200  367199 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-396551" [361e5aef-6284-4624-9518-94a26741e7f7] Running
	I0216 18:06:32.872210  367199 system_pods.go:61] "metrics-server-57f55c9bc5-n6r6k" [0ad7ef49-3165-4611-9b57-f7e945646bd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 18:06:32.872224  367199 system_pods.go:61] "storage-provisioner" [11bdf523-55d2-4145-b53d-0b644319dd23] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 18:06:32.872235  367199 system_pods.go:74] duration metric: took 6.752856ms to wait for pod list to return data ...
	I0216 18:06:32.872245  367199 default_sa.go:34] waiting for default service account to be created ...
	I0216 18:06:32.910383  367199 default_sa.go:45] found service account: "default"
	I0216 18:06:32.910411  367199 default_sa.go:55] duration metric: took 38.157248ms for default service account to be created ...
	I0216 18:06:32.910423  367199 system_pods.go:116] waiting for k8s-apps to be running ...
	I0216 18:06:33.115202  367199 system_pods.go:86] 9 kube-system pods found
	I0216 18:06:33.115236  367199 system_pods.go:89] "coredns-5dd5756b68-9kzs2" [b8b16a57-501c-4488-ba87-dce6ff907658] Running
	I0216 18:06:33.115244  367199 system_pods.go:89] "coredns-5dd5756b68-j9vdm" [fde6120e-da8e-4edb-a5f7-2886d7e92b82] Running
	I0216 18:06:33.115249  367199 system_pods.go:89] "etcd-default-k8s-diff-port-396551" [f6684cf9-26fa-4ab3-b7ca-b0caa7a05c01] Running
	I0216 18:06:33.115256  367199 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-396551" [24d9d52c-9594-46c3-9031-7ef0f1284c69] Running
	I0216 18:06:33.115263  367199 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-396551" [3ba54399-1d8b-46a3-8fec-6847bfe4e6fd] Running
	I0216 18:06:33.115268  367199 system_pods.go:89] "kube-proxy-nrvwn" [acde4c5b-3cea-415d-9767-eb7259718b4f] Running
	I0216 18:06:33.115274  367199 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-396551" [361e5aef-6284-4624-9518-94a26741e7f7] Running
	I0216 18:06:33.115290  367199 system_pods.go:89] "metrics-server-57f55c9bc5-n6r6k" [0ad7ef49-3165-4611-9b57-f7e945646bd3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 18:06:33.115303  367199 system_pods.go:89] "storage-provisioner" [11bdf523-55d2-4145-b53d-0b644319dd23] Running
	I0216 18:06:33.115311  367199 system_pods.go:126] duration metric: took 204.883028ms to wait for k8s-apps to be running ...
	I0216 18:06:33.115319  367199 system_svc.go:44] waiting for kubelet service to be running ....
	I0216 18:06:33.115382  367199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:06:33.131317  367199 system_svc.go:56] duration metric: took 15.977989ms WaitForService to wait for kubelet.
	I0216 18:06:33.131345  367199 kubeadm.go:581] duration metric: took 3.65416941s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0216 18:06:33.131365  367199 node_conditions.go:102] verifying NodePressure condition ...
	I0216 18:06:33.314008  367199 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 18:06:33.314047  367199 node_conditions.go:123] node cpu capacity is 2
	I0216 18:06:33.314061  367199 node_conditions.go:105] duration metric: took 182.690182ms to run NodePressure ...
	I0216 18:06:33.314073  367199 start.go:228] waiting for startup goroutines ...
	I0216 18:06:33.314079  367199 start.go:233] waiting for cluster config update ...
	I0216 18:06:33.314089  367199 start.go:242] writing updated cluster config ...
	I0216 18:06:33.314422  367199 ssh_runner.go:195] Run: rm -f paused
	I0216 18:06:33.382969  367199 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0216 18:06:33.385841  367199 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-396551" cluster and "default" namespace by default
	I0216 18:06:54.833058  345500 kubeadm.go:322] 
	I0216 18:06:54.833130  345500 kubeadm.go:322] Unfortunately, an error has occurred:
	I0216 18:06:54.833171  345500 kubeadm.go:322] 	timed out waiting for the condition
	I0216 18:06:54.833181  345500 kubeadm.go:322] 
	I0216 18:06:54.833214  345500 kubeadm.go:322] This error is likely caused by:
	I0216 18:06:54.833270  345500 kubeadm.go:322] 	- The kubelet is not running
	I0216 18:06:54.833421  345500 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0216 18:06:54.833437  345500 kubeadm.go:322] 
	I0216 18:06:54.833539  345500 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0216 18:06:54.833581  345500 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0216 18:06:54.833629  345500 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0216 18:06:54.833641  345500 kubeadm.go:322] 
	I0216 18:06:54.833754  345500 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0216 18:06:54.833858  345500 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0216 18:06:54.833940  345500 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0216 18:06:54.833998  345500 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0216 18:06:54.834075  345500 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0216 18:06:54.834107  345500 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0216 18:06:54.837607  345500 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0216 18:06:54.837754  345500 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0216 18:06:54.837991  345500 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0216 18:06:54.838112  345500 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0216 18:06:54.838197  345500 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0216 18:06:54.838259  345500 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0216 18:06:54.838325  345500 kubeadm.go:406] StartCluster complete in 12m31.378806665s
	I0216 18:06:54.838408  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0216 18:06:54.855047  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.855075  345500 logs.go:278] No container was found matching "kube-apiserver"
	I0216 18:06:54.855139  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0216 18:06:54.873327  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.873349  345500 logs.go:278] No container was found matching "etcd"
	I0216 18:06:54.873408  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0216 18:06:54.891087  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.891113  345500 logs.go:278] No container was found matching "coredns"
	I0216 18:06:54.891174  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0216 18:06:54.909506  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.909531  345500 logs.go:278] No container was found matching "kube-scheduler"
	I0216 18:06:54.909590  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0216 18:06:54.927178  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.927200  345500 logs.go:278] No container was found matching "kube-proxy"
	I0216 18:06:54.927262  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0216 18:06:54.945838  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.945860  345500 logs.go:278] No container was found matching "kube-controller-manager"
	I0216 18:06:54.945919  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0216 18:06:54.962777  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.962800  345500 logs.go:278] No container was found matching "kindnet"
	I0216 18:06:54.962864  345500 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0216 18:06:54.979838  345500 logs.go:276] 0 containers: []
	W0216 18:06:54.979859  345500 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0216 18:06:54.979871  345500 logs.go:123] Gathering logs for dmesg ...
	I0216 18:06:54.979884  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0216 18:06:55.000522  345500 logs.go:123] Gathering logs for describe nodes ...
	I0216 18:06:55.000555  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0216 18:06:55.080113  345500 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0216 18:06:55.080175  345500 logs.go:123] Gathering logs for Docker ...
	I0216 18:06:55.080201  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0216 18:06:55.100957  345500 logs.go:123] Gathering logs for container status ...
	I0216 18:06:55.100994  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0216 18:06:55.142764  345500 logs.go:123] Gathering logs for kubelet ...
	I0216 18:06:55.142792  345500 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0216 18:06:55.174050  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:33 old-k8s-version-488384 kubelet[10015]: E0216 18:06:33.601902   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 18:06:55.177196  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:34 old-k8s-version-488384 kubelet[10015]: E0216 18:06:34.598572   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 18:06:55.180199  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:35 old-k8s-version-488384 kubelet[10015]: E0216 18:06:35.599305   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0216 18:06:55.185749  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:37 old-k8s-version-488384 kubelet[10015]: E0216 18:06:37.600892   10015 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 18:06:55.203743  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:45 old-k8s-version-488384 kubelet[10015]: E0216 18:06:45.599731   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0216 18:06:55.206538  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:46 old-k8s-version-488384 kubelet[10015]: E0216 18:06:46.599194   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0216 18:06:55.211346  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:48 old-k8s-version-488384 kubelet[10015]: E0216 18:06:48.598609   10015 pod_workers.go:191] Error syncing pod f5bdbfcedf2bccc429ba471ffe3804b7 ("etcd-old-k8s-version-488384_kube-system(f5bdbfcedf2bccc429ba471ffe3804b7)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0216 18:06:55.214175  345500 logs.go:138] Found kubelet problem: Feb 16 18:06:49 old-k8s-version-488384 kubelet[10015]: E0216 18:06:49.597666   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
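All of these kubelet problems share one likely root cause: the v1.16 dockershim cannot parse the image metadata returned by Docker 25.0.3 (the unvalidated version flagged in the SystemVerification warning), so every static pod fails with ImageInspectError before a single control-plane container starts. One way to look at the metadata in question from the node (image name copied from the log line; the inspect itself succeeds, it is the old kubelet's parsing that fails):

	docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 \
	  --format 'Id={{.Id}} Size={{.Size}}'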
	W0216 18:06:55.226222  345500 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1053-aws
	DOCKER_VERSION: 25.0.3
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0216 18:06:55.226277  345500 out.go:239] * 
	W0216 18:06:55.226473  345500 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0216 18:06:55.226511  345500 out.go:239] * 
	W0216 18:06:55.227563  345500 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0216 18:06:55.230540  345500 out.go:177] X Problems detected in kubelet:
	I0216 18:06:55.232806  345500 out.go:177]   Feb 16 18:06:33 old-k8s-version-488384 kubelet[10015]: E0216 18:06:33.601902   10015 pod_workers.go:191] Error syncing pod 9e914279041b7fec183513988f6a94cb ("kube-apiserver-old-k8s-version-488384_kube-system(9e914279041b7fec183513988f6a94cb)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0216 18:06:55.235603  345500 out.go:177]   Feb 16 18:06:34 old-k8s-version-488384 kubelet[10015]: E0216 18:06:34.598572   10015 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-488384_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0216 18:06:55.240627  345500 out.go:177]   Feb 16 18:06:35 old-k8s-version-488384 kubelet[10015]: E0216 18:06:35.599305   10015 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-488384_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0216 18:06:55.250380  345500 out.go:177] 
	W0216 18:06:55.259601  345500 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0216 18:06:55.259690  345500 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0216 18:06:55.259715  345500 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0216 18:06:55.268587  345500 out.go:177] 
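The Suggestion line above can be tried directly. A minimal sketch, assuming the same binary and profile as this run, of retrying the start with the kubelet pointed at the systemd cgroup driver, then checking the kubelet as kubeadm's advice suggests if it still fails:

	# retry with the cgroup driver the Suggestion line names
	out/minikube-linux-arm64 start -p old-k8s-version-488384 --extra-config=kubelet.cgroup-driver=systemd
	# if kubeadm still times out waiting for the control plane, read the kubelet journal inside the node
	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh -- sudo journalctl -xeu kubelet | tail -n 50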
	
	
	==> Docker <==
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.022139788Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.023967258Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.025177131Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.091908374Z" level=info msg="Starting up"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.112216660Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.993072535Z" level=info msg="Loading containers: start."
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.104735415Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.139242043Z" level=info msg="Loading containers: done."
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.150758417Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.150982673Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.178457118Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:54:22 old-k8s-version-488384 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.179135039Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:58:43 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:43.813830352Z" level=info msg="ignoring event" container=5398ff3d2bdac036977aa16f6311ebd247a65deac8ff004d4b5ac20165adee5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.153229677Z" level=info msg="ignoring event" container=71fc2aebb1c88db8ea8bc9187cf6390ab6ec21bafa138ba3811cf26f20249b3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.266528429Z" level=info msg="ignoring event" container=8a6c99aed1c0b1834041c6a6bc1c178c6492f49ba85082b082db9cd21c820220 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.367700373Z" level=info msg="ignoring event" container=afdc289f5646d486711734d8f39eca3b87f3ecbdc9e9580dab74789f73becb83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.760970045Z" level=info msg="ignoring event" container=b2844fac18da06fd1c4486f4f70f77eb837917110f86429305aafab762d85004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.852786686Z" level=info msg="ignoring event" container=6ef48fefbac89e361de0ab2058c5ae613d94843a40528f867feac7935f6f703e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.935585907Z" level=info msg="ignoring event" container=77ecde8670d9b6223c89fe2a85458af3a411f08224b994bcee4510fa265d0245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:51 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:51.021288966Z" level=info msg="ignoring event" container=8db807559780922fed33221f1377b2278e84a7bf519d6d793caf430b38794084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
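The IsDockerSystemdCheck warning in the kubeadm output above reports dockerd running with the cgroupfs driver. A sketch of the standard change from the Kubernetes setup guide that warning links to, assuming it is applied inside the minikube node (this run does not show it being applied):

	# /etc/docker/daemon.json -- switch dockerd to the systemd cgroup driver
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}
	# restart the daemon so the new driver takes effect
	sudo systemctl restart docker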
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000736] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000ddff12b8
	[  +0.001060] FS-Cache: O-key=[8] '0461f10000000000'
	[  +0.000753] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000015591770
	[  +0.001047] FS-Cache: N-key=[8] '0461f10000000000'
	[Feb16 16:51] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=000000006efb19ee
	[  +0.001084] FS-Cache: O-key=[8] '0361f10000000000'
	[  +0.000809] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=00000000b472c289
	[  +0.001072] FS-Cache: N-key=[8] '0361f10000000000'
	[  +0.382339] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001082] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000f3dd8454
	[  +0.001083] FS-Cache: O-key=[8] '0661f10000000000'
	[  +0.000812] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000032d8be23
	[  +0.001050] FS-Cache: N-key=[8] '0661f10000000000'
	[Feb16 16:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb16 17:33] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010301] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.007673] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.156648] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> kernel <==
	 18:06:56 up  1:49,  0 users,  load average: 1.15, 0.89, 1.57
	Linux old-k8s-version-488384 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.045383   10015 event.go:246] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: dial tcp 192.168.67.2:8443: connect: connection refused' (may retry after sleeping)
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.086226   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.111368   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.186402   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.286624   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.312097   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.386881   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.487085   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.512683   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.587260   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.687455   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.711684   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.787783   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.887973   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.912043   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:55 old-k8s-version-488384 kubelet[10015]: E0216 18:06:55.988151   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.088346   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.112070   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.188576   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.288752   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.312860   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.388921   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.489092   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.513756   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:06:56 old-k8s-version-488384 kubelet[10015]: E0216 18:06:56.589298   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	

                                                
                                                
-- /stdout --
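Every kubelet error in the dump above is a connection refused against the API server at 192.168.67.2:8443, and the container status table is empty, so no control-plane container ever came up. A sketch of the manual follow-up that kubeadm's own advice points at, assuming the same profile name; CONTAINERID is a placeholder for an ID printed by the first command:

	# list control-plane containers inside the node, per the kubeadm advice
	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh "docker ps -a | grep kube | grep -v pause"
	# read the logs of a failed container found above
	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh "docker logs CONTAINERID"
	# check the image metadata the ImageInspectError complains about
	out/minikube-linux-arm64 -p old-k8s-version-488384 ssh "docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0"

Here the last command is the telling one: the kubelet's ImageInspectError ("Id or size of image ... is not set") indicates the cached v1.16.0 control-plane images themselves are unusable, which would explain why no static pod started and the API server never listened.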
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 2 (501.243288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-488384" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (767.92s)
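If this needs to be filed upstream, the boxed advice earlier in the output asks for a full log bundle; a sketch of collecting one for this profile, assuming the same binary as this run:

	# write the complete minikube log bundle for the failing profile to logs.txt
	out/minikube-linux-arm64 -p old-k8s-version-488384 logs --file=logs.txt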

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (410.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:07:40.171058    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:08:09.424054    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:08:21.177755    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 89 more times]
E0216 18:09:50.355119    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 4 more times]
E0216 18:09:55.530208    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 15 more times]
E0216 18:10:11.567352    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 11 more times]
E0216 18:10:23.334603    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 5 more times]
E0216 18:10:29.360692    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.365970    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.376214    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.396493    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.436891    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.517157    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.677557    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
E0216 18:10:29.998025    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:10:30.638724    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:10:31.918950    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 2 more times]
E0216 18:10:34.479628    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 4 more times]
E0216 18:10:39.600138    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
[previous warning repeated 9 more times]
E0216 18:10:49.841003    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:10:56.302920    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:11:10.321579    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:11:21.543167    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:11:31.316690    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:11:42.639755    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:11:51.282670    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:12:03.077925    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:12:40.171670    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:12:44.586789    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:12:53.404546    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:13:09.423243    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:13:13.203260    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/default-k8s-diff-port-396551/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
E0216 18:13:21.178023    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
(previous warning repeated 16 times in total)
E0216 18:13:37.211121    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.67.2:8443: connect: connection refused
(previous warning repeated 9 times in total)
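Every warning above is the same GET against the cluster apiserver failing at the TCP level. A minimal manual reproduction of that request, taken verbatim from the warning text, is sketched below; note that without a bearer token the apiserver would normally answer 401, but here the connection is refused before authentication even begins:

	curl -k 'https://192.168.67.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard'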
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 2 (292.861651ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-488384" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
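For reference, once the apiserver is reachable again the same pod query the helper was polling can be issued directly with kubectl; a sketch, assuming the kubeconfig context carries minikube's usual profile name:

	kubectl --context old-k8s-version-488384 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard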
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-488384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-488384:
-- stdout --
	[
	    {
	        "Id": "2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d",
	        "Created": "2024-02-16T17:43:51.781636674Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-16T17:54:09.894404762Z",
	            "FinishedAt": "2024-02-16T17:54:08.299046886Z"
	        },
	        "Image": "sha256:83c2925cbfe44b87b5c0672f16807927d87d8625e89de4dc154c45daaaa04b5b",
	        "ResolvConfPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/hosts",
	        "LogPath": "/var/lib/docker/containers/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d/2ad7a05058fe2549cddf95871269dd919b02c087d6bc4ea6c1c641b232f4238d-json.log",
	        "Name": "/old-k8s-version-488384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-488384:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-488384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621-init/diff:/var/lib/docker/overlay2/946a7b4f2791bd4745aa26fd1fdd5eefb03c154f3c1fd517458d1937bbb85039/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7434aa8b437605996aed483edf7d3b633bc7acee8630f474670417e40cd0e621/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-488384",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-488384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-488384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-488384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c66234ad4617bc50a90452cc97feb6068a7da7d63af736570cfde4ddcd6338c7",
	            "SandboxKey": "/var/run/docker/netns/c66234ad4617",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-488384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ad7a05058fe",
	                        "old-k8s-version-488384"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "066a8ce33ebb9a8bf9130346706c7668acc42f9f2a9352243a5b99995ed10eb4",
	                    "EndpointID": "d450be87f3ed3d1e3561d8cc627e39f3e3bcf740069efe096870713ebb0ad0af",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-488384",
	                        "2ad7a05058fe"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
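The inspect output shows every exposed guest port published on a loopback host port (for example 8443/tcp on 127.0.0.1:33079). A single mapping can be pulled out with the same Go template this log itself uses later for port 22; a sketch for the apiserver port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-488384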
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 2 (296.568596ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
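The two status fields probed separately above (.Host and .APIServer) can be combined into one call; a sketch, assuming minikube accepts a composite Go template in --format:

	out/minikube-linux-arm64 status --format='{{.Host}}:{{.APIServer}}' -p old-k8s-version-488384

For the state captured above this would report Running:Stopped, i.e. the container is up while the apiserver inside it is down, consistent with the connection-refused warnings.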
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-488384 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:58 UTC | 16 Feb 24 17:59 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	| delete  | -p embed-certs-198397                                  | embed-certs-198397           | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	| delete  | -p                                                     | disable-driver-mounts-083322 | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 17:59 UTC |
	|         | disable-driver-mounts-083322                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 17:59 UTC | 16 Feb 24 18:00 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-396551  | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-396551       | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:00 UTC | 16 Feb 24 18:06 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-396551                           | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:06 UTC | 16 Feb 24 18:06 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:06 UTC | 16 Feb 24 18:06 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:06 UTC | 16 Feb 24 18:06 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-396551 | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | default-k8s-diff-port-396551                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-474812 --memory=2200 --alsologtostderr   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-474812             | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-474812                                   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-474812                  | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-474812 --memory=2200 --alsologtostderr   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:07 UTC | 16 Feb 24 18:08 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=docker            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-474812 image list                           | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:08 UTC | 16 Feb 24 18:08 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-474812                                   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:08 UTC | 16 Feb 24 18:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-474812                                   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:08 UTC | 16 Feb 24 18:08 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-474812                                   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:08 UTC | 16 Feb 24 18:08 UTC |
	| delete  | -p newest-cni-474812                                   | newest-cni-474812            | jenkins | v1.32.0 | 16 Feb 24 18:08 UTC | 16 Feb 24 18:08 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 18:07:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 18:07:59.532490  388892 out.go:291] Setting OutFile to fd 1 ...
	I0216 18:07:59.532709  388892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 18:07:59.532718  388892 out.go:304] Setting ErrFile to fd 2...
	I0216 18:07:59.532725  388892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 18:07:59.532962  388892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 18:07:59.533320  388892 out.go:298] Setting JSON to false
	I0216 18:07:59.534192  388892 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6629,"bootTime":1708100250,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 18:07:59.534262  388892 start.go:139] virtualization:  
	I0216 18:07:59.536584  388892 out.go:177] * [newest-cni-474812] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 18:07:59.539516  388892 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 18:07:59.541103  388892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 18:07:59.539689  388892 notify.go:220] Checking for updates...
	I0216 18:07:59.544790  388892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:07:59.546465  388892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 18:07:59.548079  388892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 18:07:59.549964  388892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 18:07:59.552021  388892 config.go:182] Loaded profile config "newest-cni-474812": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 18:07:59.552593  388892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 18:07:59.575580  388892 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 18:07:59.575693  388892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 18:07:59.657295  388892 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 18:07:59.647305811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 18:07:59.657412  388892 docker.go:295] overlay module found
	I0216 18:07:59.659420  388892 out.go:177] * Using the docker driver based on existing profile
	I0216 18:07:59.661582  388892 start.go:299] selected driver: docker
	I0216 18:07:59.661599  388892 start.go:903] validating driver "docker" against &{Name:newest-cni-474812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-474812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:07:59.661690  388892 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 18:07:59.662314  388892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 18:07:59.719390  388892 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 18:07:59.710342802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 18:07:59.719745  388892 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0216 18:07:59.719816  388892 cni.go:84] Creating CNI manager for ""
	I0216 18:07:59.719836  388892 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:07:59.719848  388892 start_flags.go:323] config:
	{Name:newest-cni-474812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-474812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:07:59.721974  388892 out.go:177] * Starting control plane node newest-cni-474812 in cluster newest-cni-474812
	I0216 18:07:59.723874  388892 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 18:07:59.725545  388892 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
	I0216 18:07:59.727189  388892 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 18:07:59.727242  388892 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0216 18:07:59.727260  388892 cache.go:56] Caching tarball of preloaded images
	I0216 18:07:59.727284  388892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 18:07:59.727344  388892 preload.go:174] Found /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0216 18:07:59.727354  388892 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0216 18:07:59.727469  388892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/config.json ...
	I0216 18:07:59.743093  388892 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
	I0216 18:07:59.743119  388892 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
	I0216 18:07:59.743139  388892 cache.go:194] Successfully downloaded all kic artifacts
	I0216 18:07:59.743169  388892 start.go:365] acquiring machines lock for newest-cni-474812: {Name:mkcfa8c5a7f6663d71fc2eed4aee956856b6fed5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0216 18:07:59.743247  388892 start.go:369] acquired machines lock for "newest-cni-474812" in 49.608µs
	I0216 18:07:59.743270  388892 start.go:96] Skipping create...Using existing machine configuration
	I0216 18:07:59.743288  388892 fix.go:54] fixHost starting: 
	I0216 18:07:59.743563  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:07:59.762548  388892 fix.go:102] recreateIfNeeded on newest-cni-474812: state=Stopped err=<nil>
	W0216 18:07:59.762576  388892 fix.go:128] unexpected machine state, will restart: <nil>
	I0216 18:07:59.764758  388892 out.go:177] * Restarting existing docker container for "newest-cni-474812" ...
	I0216 18:07:59.766789  388892 cli_runner.go:164] Run: docker start newest-cni-474812
	I0216 18:08:00.106696  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:00.138964  388892 kic.go:430] container "newest-cni-474812" state is running.
	I0216 18:08:00.139510  388892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474812
	I0216 18:08:00.167236  388892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/config.json ...
	I0216 18:08:00.167508  388892 machine.go:88] provisioning docker machine ...
	I0216 18:08:00.167536  388892 ubuntu.go:169] provisioning hostname "newest-cni-474812"
	I0216 18:08:00.167598  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:00.191616  388892 main.go:141] libmachine: Using SSH client type: native
	I0216 18:08:00.192066  388892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 18:08:00.192082  388892 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-474812 && echo "newest-cni-474812" | sudo tee /etc/hostname
	I0216 18:08:00.192863  388892 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0216 18:08:03.344694  388892 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-474812
	
	I0216 18:08:03.344777  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:03.361187  388892 main.go:141] libmachine: Using SSH client type: native
	I0216 18:08:03.361598  388892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 18:08:03.361630  388892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-474812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-474812/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-474812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0216 18:08:03.500559  388892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0216 18:08:03.500598  388892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-2208/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-2208/.minikube}
	I0216 18:08:03.500627  388892 ubuntu.go:177] setting up certificates
	I0216 18:08:03.500664  388892 provision.go:83] configureAuth start
	I0216 18:08:03.500727  388892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474812
	I0216 18:08:03.517115  388892 provision.go:138] copyHostCerts
	I0216 18:08:03.517183  388892 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem, removing ...
	I0216 18:08:03.517195  388892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem
	I0216 18:08:03.517272  388892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/key.pem (1675 bytes)
	I0216 18:08:03.517376  388892 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem, removing ...
	I0216 18:08:03.517387  388892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem
	I0216 18:08:03.517417  388892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/ca.pem (1078 bytes)
	I0216 18:08:03.517477  388892 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem, removing ...
	I0216 18:08:03.517486  388892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem
	I0216 18:08:03.517512  388892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-2208/.minikube/cert.pem (1123 bytes)
	I0216 18:08:03.517557  388892 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem org=jenkins.newest-cni-474812 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-474812]
	I0216 18:08:04.384690  388892 provision.go:172] copyRemoteCerts
	I0216 18:08:04.384756  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0216 18:08:04.384814  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:04.403507  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:04.509598  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0216 18:08:04.534012  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0216 18:08:04.560366  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0216 18:08:04.585066  388892 provision.go:86] duration metric: configureAuth took 1.084385674s
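
provision.go:112 above issues a server certificate whose SAN list covers the node IP, loopback, and the machine names, which is why the later TLS-verified dockerd endpoint accepts connections under any of those addresses. A hedged sketch of an equivalent certificate template using Go's crypto/x509 (self-signed here for brevity; the provisioner actually signs with the minikube CA key):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SANs mirroring the san=[...] list logged above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-474812"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-474812"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
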
	I0216 18:08:04.585091  388892 ubuntu.go:193] setting minikube options for container-runtime
	I0216 18:08:04.585305  388892 config.go:182] Loaded profile config "newest-cni-474812": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 18:08:04.585369  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:04.600982  388892 main.go:141] libmachine: Using SSH client type: native
	I0216 18:08:04.601390  388892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 18:08:04.601407  388892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0216 18:08:04.741146  388892 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0216 18:08:04.741168  388892 ubuntu.go:71] root file system type: overlay
	I0216 18:08:04.741327  388892 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0216 18:08:04.741402  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:04.763510  388892 main.go:141] libmachine: Using SSH client type: native
	I0216 18:08:04.763913  388892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 18:08:04.764001  388892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0216 18:08:04.917081  388892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0216 18:08:04.917162  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:04.933548  388892 main.go:141] libmachine: Using SSH client type: native
	I0216 18:08:04.933957  388892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 33102 <nil> <nil>}
	I0216 18:08:04.933988  388892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0216 18:08:05.078299  388892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
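
The docker.service update above follows a write-new/diff/swap pattern: the unit is rendered to docker.service.new, and the daemon-reload, enable, and restart only run when diff reports a change, so an unchanged config costs nothing. A small Go sketch of the same change-detection idea for any config file (paths and the 2x-write avoidance are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // writeIfChanged installs newContent at path only when it differs from
    // what is already there; the caller restarts the service only if it
    // returns true, matching the diff-guarded restart in the log.
    func writeIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical: skip daemon-reload/restart
        }
        // Write to a sibling ".new" file first, then rename into place.
        tmp := path + ".new"
        if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
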
	I0216 18:08:05.078345  388892 machine.go:91] provisioned docker machine in 4.910817574s
	I0216 18:08:05.078384  388892 start.go:300] post-start starting for "newest-cni-474812" (driver="docker")
	I0216 18:08:05.078406  388892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0216 18:08:05.078478  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0216 18:08:05.078520  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:05.096818  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:05.197843  388892 ssh_runner.go:195] Run: cat /etc/os-release
	I0216 18:08:05.201159  388892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0216 18:08:05.201197  388892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0216 18:08:05.201210  388892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0216 18:08:05.201222  388892 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0216 18:08:05.201232  388892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/addons for local assets ...
	I0216 18:08:05.201292  388892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-2208/.minikube/files for local assets ...
	I0216 18:08:05.201369  388892 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem -> 75132.pem in /etc/ssl/certs
	I0216 18:08:05.201474  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0216 18:08:05.209757  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /etc/ssl/certs/75132.pem (1708 bytes)
	I0216 18:08:05.234575  388892 start.go:303] post-start completed in 156.168089ms
	I0216 18:08:05.234664  388892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 18:08:05.234711  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:05.250446  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:05.345515  388892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0216 18:08:05.349869  388892 fix.go:56] fixHost completed within 5.606582692s
	I0216 18:08:05.349937  388892 start.go:83] releasing machines lock for "newest-cni-474812", held for 5.606677724s
	I0216 18:08:05.350032  388892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474812
	I0216 18:08:05.366255  388892 ssh_runner.go:195] Run: cat /version.json
	I0216 18:08:05.366313  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:05.366416  388892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0216 18:08:05.366483  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:05.389656  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:05.399726  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:05.488275  388892 ssh_runner.go:195] Run: systemctl --version
	I0216 18:08:05.628488  388892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0216 18:08:05.632717  388892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0216 18:08:05.650718  388892 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0216 18:08:05.650846  388892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0216 18:08:05.659614  388892 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0216 18:08:05.659681  388892 start.go:475] detecting cgroup driver to use...
	I0216 18:08:05.659717  388892 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 18:08:05.659821  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 18:08:05.676683  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0216 18:08:05.686692  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0216 18:08:05.696523  388892 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0216 18:08:05.696600  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0216 18:08:05.706155  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 18:08:05.716301  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0216 18:08:05.726061  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0216 18:08:05.735645  388892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0216 18:08:05.744771  388892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0216 18:08:05.754557  388892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0216 18:08:05.763190  388892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0216 18:08:05.775022  388892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:08:05.854515  388892 ssh_runner.go:195] Run: sudo systemctl restart containerd
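
The sed runs above flip SystemdCgroup (and related keys) in /etc/containerd/config.toml so containerd's cgroup driver matches the cgroupfs driver detected on the host. A line-oriented Go sketch of that substitution, assuming the same regex-over-TOML approach the log shows (a real TOML parser would be more robust):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
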
	I0216 18:08:05.961856  388892 start.go:475] detecting cgroup driver to use...
	I0216 18:08:05.961897  388892 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0216 18:08:05.961951  388892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0216 18:08:05.975745  388892 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0216 18:08:05.975814  388892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0216 18:08:05.989381  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0216 18:08:06.009431  388892 ssh_runner.go:195] Run: which cri-dockerd
	I0216 18:08:06.013589  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0216 18:08:06.025085  388892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0216 18:08:06.051682  388892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0216 18:08:06.154584  388892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0216 18:08:06.260022  388892 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0216 18:08:06.260168  388892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0216 18:08:06.282242  388892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:08:06.373997  388892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0216 18:08:06.745137  388892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0216 18:08:06.757071  388892 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0216 18:08:06.770873  388892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 18:08:06.783039  388892 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0216 18:08:06.884009  388892 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0216 18:08:06.985042  388892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:08:07.075876  388892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0216 18:08:07.089616  388892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0216 18:08:07.100804  388892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0216 18:08:07.190789  388892 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0216 18:08:07.273005  388892 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0216 18:08:07.273120  388892 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0216 18:08:07.276963  388892 start.go:543] Will wait 60s for crictl version
	I0216 18:08:07.277066  388892 ssh_runner.go:195] Run: which crictl
	I0216 18:08:07.282279  388892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0216 18:08:07.332100  388892 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0216 18:08:07.332241  388892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 18:08:07.354438  388892 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0216 18:08:07.381190  388892 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0216 18:08:07.381309  388892 cli_runner.go:164] Run: docker network inspect newest-cni-474812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0216 18:08:07.397091  388892 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0216 18:08:07.400676  388892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 18:08:07.414248  388892 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0216 18:08:07.416082  388892 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 18:08:07.416177  388892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 18:08:07.434216  388892 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 18:08:07.434241  388892 docker.go:615] Images already preloaded, skipping extraction
	I0216 18:08:07.434303  388892 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0216 18:08:07.451593  388892 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0216 18:08:07.451615  388892 cache_images.go:84] Images are preloaded, skipping loading
	I0216 18:08:07.451685  388892 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0216 18:08:07.502199  388892 cni.go:84] Creating CNI manager for ""
	I0216 18:08:07.502228  388892 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:08:07.502245  388892 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0216 18:08:07.502265  388892 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-474812 NodeName:newest-cni-474812 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0216 18:08:07.502423  388892 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-474812"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
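
The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new. When debugging a failed restart it can help to re-parse what minikube actually wrote; a minimal sketch using gopkg.in/yaml.v3 (an assumed dependency, not part of this test suite) that lists each document's kind:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Feed the multi-document kubeadm.yaml on stdin and print each
        // document's kind/apiVersion as a sanity check of the render.
        dec := yaml.NewDecoder(os.Stdin)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
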
	
	I0216 18:08:07.502502  388892 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-474812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-474812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0216 18:08:07.502582  388892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0216 18:08:07.511559  388892 binaries.go:44] Found k8s binaries, skipping transfer
	I0216 18:08:07.511632  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0216 18:08:07.521192  388892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0216 18:08:07.539890  388892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0216 18:08:07.558101  388892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0216 18:08:07.576234  388892 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0216 18:08:07.579912  388892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0216 18:08:07.591073  388892 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812 for IP: 192.168.76.2
	I0216 18:08:07.591104  388892 certs.go:190] acquiring lock for shared ca certs: {Name:mkc4dfb4b2b1da0d6a80fb9567025307b764443b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:08:07.591232  388892 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key
	I0216 18:08:07.591290  388892 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key
	I0216 18:08:07.591379  388892 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/client.key
	I0216 18:08:07.591446  388892 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/apiserver.key.31bdca25
	I0216 18:08:07.591493  388892 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/proxy-client.key
	I0216 18:08:07.591605  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem (1338 bytes)
	W0216 18:08:07.591637  388892 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513_empty.pem, impossibly tiny 0 bytes
	I0216 18:08:07.591650  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca-key.pem (1679 bytes)
	I0216 18:08:07.591680  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/ca.pem (1078 bytes)
	I0216 18:08:07.591710  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/cert.pem (1123 bytes)
	I0216 18:08:07.591737  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/certs/home/jenkins/minikube-integration/17936-2208/.minikube/certs/key.pem (1675 bytes)
	I0216 18:08:07.591790  388892 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem (1708 bytes)
	I0216 18:08:07.592399  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0216 18:08:07.617123  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0216 18:08:07.641864  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0216 18:08:07.665762  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/newest-cni-474812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0216 18:08:07.689446  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0216 18:08:07.713364  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0216 18:08:07.736836  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0216 18:08:07.760434  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0216 18:08:07.783859  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0216 18:08:07.807970  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/certs/7513.pem --> /usr/share/ca-certificates/7513.pem (1338 bytes)
	I0216 18:08:07.831133  388892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/ssl/certs/75132.pem --> /usr/share/ca-certificates/75132.pem (1708 bytes)
	I0216 18:08:07.855609  388892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0216 18:08:07.873803  388892 ssh_runner.go:195] Run: openssl version
	I0216 18:08:07.879586  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75132.pem && ln -fs /usr/share/ca-certificates/75132.pem /etc/ssl/certs/75132.pem"
	I0216 18:08:07.889230  388892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75132.pem
	I0216 18:08:07.892770  388892 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:48 /usr/share/ca-certificates/75132.pem
	I0216 18:08:07.892842  388892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75132.pem
	I0216 18:08:07.899831  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75132.pem /etc/ssl/certs/3ec20f2e.0"
	I0216 18:08:07.909219  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0216 18:08:07.919148  388892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:08:07.922959  388892 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:42 /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:08:07.923033  388892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0216 18:08:07.930186  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0216 18:08:07.939440  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7513.pem && ln -fs /usr/share/ca-certificates/7513.pem /etc/ssl/certs/7513.pem"
	I0216 18:08:07.949022  388892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7513.pem
	I0216 18:08:07.952482  388892 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:48 /usr/share/ca-certificates/7513.pem
	I0216 18:08:07.952558  388892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7513.pem
	I0216 18:08:07.959324  388892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7513.pem /etc/ssl/certs/51391683.0"
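
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed trust-directory convention: each CA certificate in /etc/ssl/certs gets a <subject-hash>.0 symlink so verification code can find it by hash without scanning every file. A sketch that reproduces the pattern by shelling out to openssl, as the log does (paths illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash and
    // symlinks <hash>.0 in the trust directory, mirroring the log's ln -fs.
    func linkBySubjectHash(certPath, trustDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        return os.Symlink(certPath, filepath.Join(trustDir, hash+".0"))
    }

    func main() {
        fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/7513.pem", "/etc/ssl/certs"))
    }
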
	I0216 18:08:07.968425  388892 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0216 18:08:07.972017  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0216 18:08:07.979018  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0216 18:08:07.985968  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0216 18:08:07.993090  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0216 18:08:08.000140  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0216 18:08:08.008363  388892 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
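
Each openssl x509 -checkend 86400 call above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? An equivalent check in Go against a PEM file, using only the standard library (the path is one of the certs probed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same predicate as `openssl x509 -checkend 86400`.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate ok")
        }
    }
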
	I0216 18:08:08.015803  388892 kubeadm.go:404] StartCluster: {Name:newest-cni-474812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-474812 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 18:08:08.015958  388892 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 18:08:08.033480  388892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0216 18:08:08.042642  388892 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0216 18:08:08.042665  388892 kubeadm.go:636] restartCluster start
	I0216 18:08:08.042755  388892 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0216 18:08:08.051600  388892 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:08.052201  388892 kubeconfig.go:135] verify returned: extract IP: "newest-cni-474812" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:08:08.052420  388892 kubeconfig.go:146] "newest-cni-474812" context is missing from /home/jenkins/minikube-integration/17936-2208/kubeconfig - will repair!
	I0216 18:08:08.052992  388892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:08:08.054567  388892 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0216 18:08:08.063988  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:08.064066  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:08.074451  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:08.564055  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:08.564144  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:08.574160  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:09.064787  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:09.064894  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:09.075603  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:09.564593  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:09.564722  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:09.574960  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:10.064484  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:10.064626  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:10.075119  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:10.564699  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:10.564780  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:10.574749  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:11.064345  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:11.064452  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:11.074802  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:11.564138  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:11.564247  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:11.574455  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:12.065001  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:12.065127  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:12.075608  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:12.564135  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:12.564239  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:12.574594  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:13.064190  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:13.064292  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:13.075130  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:13.564674  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:13.564782  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:13.574615  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:14.064207  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:14.064310  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:14.074610  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:14.564893  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:14.565002  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:14.575445  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:15.064069  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:15.064168  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:15.076290  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:15.564901  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:15.565002  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:15.574904  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:16.064121  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:16.064225  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:16.074456  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:16.564057  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:16.564154  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:16.574243  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:17.064880  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:17.064981  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:17.074884  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:17.564488  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:17.564575  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:17.574639  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:18.064459  388892 api_server.go:166] Checking apiserver status ...
	I0216 18:08:18.064548  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0216 18:08:18.075939  388892 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:18.075972  388892 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
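
The block above is a fixed-interval retry loop: api_server.go re-runs the pgrep probe roughly every 500ms until the surrounding context's deadline expires, at which point minikube concludes the apiserver is down and the cluster needs reconfiguring. A generic sketch of that poll-until-deadline shape (the probe here is a stand-in, not minikube's):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pollUntil retries probe every interval until it succeeds or ctx
    // expires, mirroring the 500ms "Checking apiserver status" cadence.
    func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := probe(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // e.g. context deadline exceeded, as seen above
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            return errors.New("apiserver not up yet") // stand-in probe
        })
        fmt.Println(err) // context deadline exceeded
    }
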
	I0216 18:08:18.075983  388892 kubeadm.go:1135] stopping kube-system containers ...
	I0216 18:08:18.076064  388892 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0216 18:08:18.097118  388892 docker.go:483] Stopping containers: [b087d49a7b6a b7c8ae33571f 4a8d8f36fa85 674efd6ef36f a170b2e6355d b9728d5e3a4a 33d940bf8e65 e2920202677d 8effb320911f e21fb4f4db57 67cc9bce62bc 640e158690fc 02fc8a3cac54 8cfbfbbcc249 9873e1876093]
	I0216 18:08:18.097235  388892 ssh_runner.go:195] Run: docker stop b087d49a7b6a b7c8ae33571f 4a8d8f36fa85 674efd6ef36f a170b2e6355d b9728d5e3a4a 33d940bf8e65 e2920202677d 8effb320911f e21fb4f4db57 67cc9bce62bc 640e158690fc 02fc8a3cac54 8cfbfbbcc249 9873e1876093
	I0216 18:08:18.119251  388892 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0216 18:08:18.132363  388892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0216 18:08:18.141627  388892 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5651 Feb 16 18:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 16 18:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 16 18:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 16 18:07 /etc/kubernetes/scheduler.conf
	
	I0216 18:08:18.141699  388892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0216 18:08:18.150811  388892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0216 18:08:18.159684  388892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0216 18:08:18.168418  388892 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:18.168484  388892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0216 18:08:18.177270  388892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0216 18:08:18.185984  388892 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0216 18:08:18.186098  388892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0216 18:08:18.194470  388892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0216 18:08:18.203548  388892 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0216 18:08:18.203572  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:18.258971  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:21.334412  388892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.075407175s)
	I0216 18:08:21.334443  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:21.485168  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:21.547204  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:21.659126  388892 api_server.go:52] waiting for apiserver process to appear ...
	I0216 18:08:21.659197  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:08:22.159544  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:08:22.660265  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:08:22.673106  388892 api_server.go:72] duration metric: took 1.013979246s to wait for apiserver process to appear ...
	I0216 18:08:22.673129  388892 api_server.go:88] waiting for apiserver healthz status ...
	I0216 18:08:22.673148  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:22.673452  388892 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0216 18:08:23.173786  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:27.414051  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 18:08:27.414101  388892 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 18:08:27.414115  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:27.552186  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0216 18:08:27.552218  388892 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0216 18:08:27.673502  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:27.712319  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 18:08:27.712354  388892 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 18:08:28.174040  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:28.183362  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0216 18:08:28.183394  388892 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0216 18:08:28.673787  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:28.685889  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0216 18:08:28.703149  388892 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 18:08:28.703182  388892 api_server.go:131] duration metric: took 6.030046098s to wait for apiserver health ...
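The 403/500 sequence above is the normal apiserver bootstrap: anonymous probes of /healthz are Forbidden until the RBAC bootstrap roles exist, and the endpoint returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending. The same probe can be reproduced with authenticated credentials through kubectl's raw passthrough; a sketch, assuming the profile's kubeconfig context is named newest-cni-474812:

	kubectl --context newest-cni-474812 get --raw '/healthz?verbose'
	# optionally skip a known-pending check while waiting:
	kubectl --context newest-cni-474812 get --raw '/healthz?exclude=poststarthook/rbac/bootstrap-roles'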
	I0216 18:08:28.703193  388892 cni.go:84] Creating CNI manager for ""
	I0216 18:08:28.703206  388892 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 18:08:28.706819  388892 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0216 18:08:28.709424  388892 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0216 18:08:28.742115  388892 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
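The 457-byte payload written above is minikube's default bridge CNI config, typically a bridge + portmap plugin chain. It can be read back from the node to confirm what was written; the expected shape is sketched in the comments (field values are assumptions, not the byte-exact file):

	minikube -p newest-cni-474812 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# roughly: { "cniVersion": "0.3.1", "name": "bridge",
	#   "plugins": [ { "type": "bridge", "isDefaultGateway": true, "ipMasq": true,
	#                  "hairpinMode": true, "ipam": { "type": "host-local" } },
	#                { "type": "portmap", "capabilities": { "portMappings": true } } ] }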
	I0216 18:08:28.773462  388892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 18:08:28.787330  388892 system_pods.go:59] 8 kube-system pods found
	I0216 18:08:28.787369  388892 system_pods.go:61] "coredns-76f75df574-6brvg" [ec775320-4a0f-4371-a281-6ae1c5f11672] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 18:08:28.787380  388892 system_pods.go:61] "etcd-newest-cni-474812" [79f97b3e-5e0f-4771-ae3c-a0a0c91c8fb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 18:08:28.787392  388892 system_pods.go:61] "kube-apiserver-newest-cni-474812" [cc08aac9-970a-4636-839e-4507caa4ac60] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 18:08:28.787399  388892 system_pods.go:61] "kube-controller-manager-newest-cni-474812" [19716706-5959-4463-8a68-97c39d9fcfe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 18:08:28.787416  388892 system_pods.go:61] "kube-proxy-bnhjb" [bea1e721-6b2e-421a-babb-d106a0671922] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0216 18:08:28.787429  388892 system_pods.go:61] "kube-scheduler-newest-cni-474812" [559c2a84-916d-4f2d-9d15-de8a2df03031] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 18:08:28.787437  388892 system_pods.go:61] "metrics-server-57f55c9bc5-t4nxc" [fcf93eef-5046-4db0-9793-77631698618e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 18:08:28.787451  388892 system_pods.go:61] "storage-provisioner" [c2420002-7da3-4a09-9256-f0ae26a74655] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0216 18:08:28.787467  388892 system_pods.go:74] duration metric: took 13.975747ms to wait for pod list to return data ...
	I0216 18:08:28.787474  388892 node_conditions.go:102] verifying NodePressure condition ...
	I0216 18:08:28.793131  388892 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 18:08:28.793177  388892 node_conditions.go:123] node cpu capacity is 2
	I0216 18:08:28.793189  388892 node_conditions.go:105] duration metric: took 5.701154ms to run NodePressure ...
	I0216 18:08:28.793208  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0216 18:08:29.291697  388892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0216 18:08:29.310537  388892 ops.go:34] apiserver oom_adj: -16
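The oom_adj value of -16 is the legacy view of the strongly negative oom_score_adj (around -997/-998) that kubelet assigns to control-plane pods: the kernel maps oom_adj = oom_score_adj * 17 / 1000, truncated toward zero, so both read back as -16. The modern interface shows the raw value; a sketch, evaluated inside the node:

	minikube -p newest-cni-474812 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_score_adj'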
	I0216 18:08:29.310606  388892 kubeadm.go:640] restartCluster took 21.267932345s
	I0216 18:08:29.310636  388892 kubeadm.go:406] StartCluster complete in 21.294835156s
	I0216 18:08:29.310683  388892 settings.go:142] acquiring lock: {Name:mkb7d1073df18b92aae32c7933eb8e8868b57c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:08:29.310792  388892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 18:08:29.311751  388892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-2208/kubeconfig: {Name:mk22ab392afde309b066ab7073c4430ce25196e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0216 18:08:29.312072  388892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0216 18:08:29.312395  388892 config.go:182] Loaded profile config "newest-cni-474812": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0216 18:08:29.312431  388892 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0216 18:08:29.312497  388892 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-474812"
	I0216 18:08:29.312511  388892 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-474812"
	W0216 18:08:29.312516  388892 addons.go:243] addon storage-provisioner should already be in state true
	I0216 18:08:29.312546  388892 host.go:66] Checking if "newest-cni-474812" exists ...
	I0216 18:08:29.313161  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:29.314436  388892 addons.go:69] Setting default-storageclass=true in profile "newest-cni-474812"
	I0216 18:08:29.314501  388892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-474812"
	I0216 18:08:29.314860  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:29.315245  388892 addons.go:69] Setting metrics-server=true in profile "newest-cni-474812"
	I0216 18:08:29.315285  388892 addons.go:234] Setting addon metrics-server=true in "newest-cni-474812"
	W0216 18:08:29.315320  388892 addons.go:243] addon metrics-server should already be in state true
	I0216 18:08:29.315377  388892 host.go:66] Checking if "newest-cni-474812" exists ...
	I0216 18:08:29.315799  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:29.320780  388892 addons.go:69] Setting dashboard=true in profile "newest-cni-474812"
	I0216 18:08:29.320811  388892 addons.go:234] Setting addon dashboard=true in "newest-cni-474812"
	W0216 18:08:29.320818  388892 addons.go:243] addon dashboard should already be in state true
	I0216 18:08:29.320857  388892 host.go:66] Checking if "newest-cni-474812" exists ...
	I0216 18:08:29.321346  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:29.350677  388892 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-474812" context rescaled to 1 replicas
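Rescaling coredns to one replica is minikube's single-node default; the equivalent manual step (a sketch) is:

	kubectl --context newest-cni-474812 -n kube-system scale deployment coredns --replicas=1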
	I0216 18:08:29.350727  388892 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0216 18:08:29.359161  388892 out.go:177] * Verifying Kubernetes components...
	I0216 18:08:29.369946  388892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 18:08:29.381763  388892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0216 18:08:29.383550  388892 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 18:08:29.383566  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0216 18:08:29.383632  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:29.398006  388892 addons.go:234] Setting addon default-storageclass=true in "newest-cni-474812"
	W0216 18:08:29.398033  388892 addons.go:243] addon default-storageclass should already be in state true
	I0216 18:08:29.398062  388892 host.go:66] Checking if "newest-cni-474812" exists ...
	I0216 18:08:29.398578  388892 cli_runner.go:164] Run: docker container inspect newest-cni-474812 --format={{.State.Status}}
	I0216 18:08:29.425495  388892 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0216 18:08:29.428249  388892 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0216 18:08:29.428268  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0216 18:08:29.428339  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:29.434146  388892 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0216 18:08:29.440828  388892 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0216 18:08:29.443083  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0216 18:08:29.443106  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0216 18:08:29.443189  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:29.476832  388892 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0216 18:08:29.476852  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0216 18:08:29.476918  388892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474812
	I0216 18:08:29.484929  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:29.508753  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:29.536795  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
	I0216 18:08:29.538319  388892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/newest-cni-474812/id_rsa Username:docker}
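All four SSH clients connect to 127.0.0.1:33102, the host port Docker published for the container's 22/tcp. The Go template from the cli_runner lines above can be run directly to resolve it:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-474812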
	I0216 18:08:29.798995  388892 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0216 18:08:29.799083  388892 api_server.go:52] waiting for apiserver process to appear ...
	I0216 18:08:29.799164  388892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 18:08:29.817121  388892 api_server.go:72] duration metric: took 466.342266ms to wait for apiserver process to appear ...
	I0216 18:08:29.817150  388892 api_server.go:88] waiting for apiserver healthz status ...
	I0216 18:08:29.817178  388892 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0216 18:08:29.832855  388892 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0216 18:08:29.836754  388892 api_server.go:141] control plane version: v1.29.0-rc.2
	I0216 18:08:29.836781  388892 api_server.go:131] duration metric: took 19.623739ms to wait for apiserver health ...
	I0216 18:08:29.836818  388892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0216 18:08:29.847085  388892 system_pods.go:59] 8 kube-system pods found
	I0216 18:08:29.847125  388892 system_pods.go:61] "coredns-76f75df574-6brvg" [ec775320-4a0f-4371-a281-6ae1c5f11672] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0216 18:08:29.847138  388892 system_pods.go:61] "etcd-newest-cni-474812" [79f97b3e-5e0f-4771-ae3c-a0a0c91c8fb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0216 18:08:29.847169  388892 system_pods.go:61] "kube-apiserver-newest-cni-474812" [cc08aac9-970a-4636-839e-4507caa4ac60] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0216 18:08:29.847185  388892 system_pods.go:61] "kube-controller-manager-newest-cni-474812" [19716706-5959-4463-8a68-97c39d9fcfe9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0216 18:08:29.847213  388892 system_pods.go:61] "kube-proxy-bnhjb" [bea1e721-6b2e-421a-babb-d106a0671922] Running
	I0216 18:08:29.847231  388892 system_pods.go:61] "kube-scheduler-newest-cni-474812" [559c2a84-916d-4f2d-9d15-de8a2df03031] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0216 18:08:29.847246  388892 system_pods.go:61] "metrics-server-57f55c9bc5-t4nxc" [fcf93eef-5046-4db0-9793-77631698618e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0216 18:08:29.847253  388892 system_pods.go:61] "storage-provisioner" [c2420002-7da3-4a09-9256-f0ae26a74655] Running
	I0216 18:08:29.847278  388892 system_pods.go:74] duration metric: took 10.434685ms to wait for pod list to return data ...
	I0216 18:08:29.847294  388892 default_sa.go:34] waiting for default service account to be created ...
	I0216 18:08:29.850526  388892 default_sa.go:45] found service account: "default"
	I0216 18:08:29.850555  388892 default_sa.go:55] duration metric: took 3.254084ms for default service account to be created ...
	I0216 18:08:29.850567  388892 kubeadm.go:581] duration metric: took 499.795231ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0216 18:08:29.850612  388892 node_conditions.go:102] verifying NodePressure condition ...
	I0216 18:08:29.860058  388892 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0216 18:08:29.860091  388892 node_conditions.go:123] node cpu capacity is 2
	I0216 18:08:29.860101  388892 node_conditions.go:105] duration metric: took 9.483809ms to run NodePressure ...
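The NodePressure check is a read of the node's advertised capacity (203034800Ki ephemeral storage and 2 CPUs here). The same figures are retrievable with a jsonpath query; a sketch against the standard Node schema:

	kubectl --context newest-cni-474812 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}'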
	I0216 18:08:29.860129  388892 start.go:228] waiting for startup goroutines ...
	I0216 18:08:29.869419  388892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0216 18:08:29.878484  388892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0216 18:08:29.936578  388892 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0216 18:08:29.936612  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0216 18:08:29.955859  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0216 18:08:29.955886  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0216 18:08:30.077464  388892 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0216 18:08:30.077503  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0216 18:08:30.085041  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0216 18:08:30.085078  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0216 18:08:30.230322  388892 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 18:08:30.230353  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0216 18:08:30.234773  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0216 18:08:30.234798  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0216 18:08:30.436866  388892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0216 18:08:30.458327  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0216 18:08:30.458357  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0216 18:08:30.584663  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0216 18:08:30.584688  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0216 18:08:30.747782  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0216 18:08:30.747812  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0216 18:08:30.874821  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0216 18:08:30.874848  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0216 18:08:30.998085  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0216 18:08:30.998111  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0216 18:08:31.110967  388892 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 18:08:31.110995  388892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0216 18:08:31.188630  388892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0216 18:08:32.420517  388892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.551059462s)
	I0216 18:08:32.420618  388892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.542087591s)
	I0216 18:08:32.591556  388892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.15464785s)
	I0216 18:08:32.591600  388892 addons.go:470] Verifying addon metrics-server=true in "newest-cni-474812"
	I0216 18:08:32.866267  388892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.677551266s)
	I0216 18:08:32.868357  388892 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-474812 addons enable metrics-server
	
	I0216 18:08:32.870595  388892 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0216 18:08:32.872722  388892 addons.go:505] enable addons completed in 3.560289131s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0216 18:08:32.872770  388892 start.go:233] waiting for cluster config update ...
	I0216 18:08:32.872785  388892 start.go:242] writing updated cluster config ...
	I0216 18:08:32.873070  388892 ssh_runner.go:195] Run: rm -f paused
	I0216 18:08:32.926558  388892 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0216 18:08:32.928550  388892 out.go:177] * Done! kubectl is now configured to use "newest-cni-474812" cluster and "default" namespace by default
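With start complete, the enabled addon set reported above (storage-provisioner, default-storageclass, metrics-server, dashboard) can be cross-checked from the host; a sketch:

	minikube -p newest-cni-474812 addons list
	kubectl --context newest-cni-474812 -n kubernetes-dashboard get pods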
	
	
	==> Docker <==
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Stopping Docker Application Container Engine...
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.022139788Z" level=info msg="Processing signal 'terminated'"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.023967258Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[810]: time="2024-02-16T17:54:21.025177131Z" level=info msg="Daemon shutdown complete"
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: docker.service: Deactivated successfully.
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Stopped Docker Application Container Engine.
	Feb 16 17:54:21 old-k8s-version-488384 systemd[1]: Starting Docker Application Container Engine...
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.091908374Z" level=info msg="Starting up"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.112216660Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 16 17:54:21 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:21.993072535Z" level=info msg="Loading containers: start."
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.104735415Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.139242043Z" level=info msg="Loading containers: done."
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.150758417Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.150982673Z" level=info msg="Daemon has completed initialization"
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.178457118Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 16 17:54:22 old-k8s-version-488384 systemd[1]: Started Docker Application Container Engine.
	Feb 16 17:54:22 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:54:22.179135039Z" level=info msg="API listen on [::]:2376"
	Feb 16 17:58:43 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:43.813830352Z" level=info msg="ignoring event" container=5398ff3d2bdac036977aa16f6311ebd247a65deac8ff004d4b5ac20165adee5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.153229677Z" level=info msg="ignoring event" container=71fc2aebb1c88db8ea8bc9187cf6390ab6ec21bafa138ba3811cf26f20249b3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.266528429Z" level=info msg="ignoring event" container=8a6c99aed1c0b1834041c6a6bc1c178c6492f49ba85082b082db9cd21c820220 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 17:58:44 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T17:58:44.367700373Z" level=info msg="ignoring event" container=afdc289f5646d486711734d8f39eca3b87f3ecbdc9e9580dab74789f73becb83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.760970045Z" level=info msg="ignoring event" container=b2844fac18da06fd1c4486f4f70f77eb837917110f86429305aafab762d85004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.852786686Z" level=info msg="ignoring event" container=6ef48fefbac89e361de0ab2058c5ae613d94843a40528f867feac7935f6f703e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:50 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:50.935585907Z" level=info msg="ignoring event" container=77ecde8670d9b6223c89fe2a85458af3a411f08224b994bcee4510fa265d0245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 16 18:02:51 old-k8s-version-488384 dockerd[1015]: time="2024-02-16T18:02:51.021288966Z" level=info msg="ignoring event" container=8db807559780922fed33221f1377b2278e84a7bf519d6d793caf430b38794084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
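localhost:8443 refusing connections is consistent with the empty container list above and the kubelet errors below: the control-plane containers are simply not running. A quick check from the host (a sketch; the node's inner docker daemon is reachable through the profile container):

	docker exec old-k8s-version-488384 docker ps -a --filter name=kube-apiserver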
	
	
	==> dmesg <==
	[  +0.000736] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000ddff12b8
	[  +0.001060] FS-Cache: O-key=[8] '0461f10000000000'
	[  +0.000753] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000015591770
	[  +0.001047] FS-Cache: N-key=[8] '0461f10000000000'
	[Feb16 16:51] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=000000006efb19ee
	[  +0.001084] FS-Cache: O-key=[8] '0361f10000000000'
	[  +0.000809] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=00000000b472c289
	[  +0.001072] FS-Cache: N-key=[8] '0361f10000000000'
	[  +0.382339] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001082] FS-Cache: O-cookie d=00000000ea93a584{9p.inode} n=00000000f3dd8454
	[  +0.001083] FS-Cache: O-key=[8] '0661f10000000000'
	[  +0.000812] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000963] FS-Cache: N-cookie d=00000000ea93a584{9p.inode} n=0000000032d8be23
	[  +0.001050] FS-Cache: N-key=[8] '0661f10000000000'
	[Feb16 16:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Feb16 17:33] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.010301] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.007673] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.156648] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> kernel <==
	 18:13:47 up  1:56,  0 users,  load average: 0.37, 0.65, 1.26
	Linux old-k8s-version-488384 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.370047   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.470198   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.511393   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.570306   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.670519   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.712003   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.770848   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.870945   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.911985   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:45 old-k8s-version-488384 kubelet[10015]: E0216 18:13:45.971687   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.071844   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.111909   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.171973   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.272139   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.312009   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.372269   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.472421   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.512045   10015 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-488384&limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.572532   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.672729   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.712653   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.772894   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.873012   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.912753   10015 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 16 18:13:46 old-k8s-version-488384 kubelet[10015]: E0216 18:13:46.973185   10015 kubelet.go:2267] node "old-k8s-version-488384" not found
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 2 (336.960342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-488384" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (410.25s)
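For a failure in this shape (apiserver Stopped, kubelet crash-looping on connection refused to 192.168.67.2:8443), the usual follow-ups beyond the status probe already shown are the full and problem-filtered log views; a sketch, assuming the flags available in current minikube releases:

	out/minikube-linux-arm64 -p old-k8s-version-488384 status
	out/minikube-linux-arm64 -p old-k8s-version-488384 logs --problems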

                                                
                                    

Test pass (294/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.75
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.2
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 12.85
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 16.97
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.2
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
31 TestOffline 96.21
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
36 TestAddons/Setup 148.89
38 TestAddons/parallel/Registry 14.64
40 TestAddons/parallel/InspektorGadget 10.75
41 TestAddons/parallel/MetricsServer 5.81
44 TestAddons/parallel/CSI 60.57
45 TestAddons/parallel/Headlamp 13.41
46 TestAddons/parallel/CloudSpanner 6.58
47 TestAddons/parallel/LocalPath 53.39
48 TestAddons/parallel/NvidiaDevicePlugin 5.57
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 11.11
54 TestCertOptions 38.23
55 TestCertExpiration 246.63
56 TestDockerFlags 43.75
57 TestForceSystemdFlag 43.9
58 TestForceSystemdEnv 45.81
64 TestErrorSpam/setup 33.34
65 TestErrorSpam/start 0.8
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.37
68 TestErrorSpam/unpause 1.44
69 TestErrorSpam/stop 2.04
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 44.59
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.71
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.82
81 TestFunctional/serial/CacheCmd/cache/add_local 0.94
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 39.75
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.22
92 TestFunctional/serial/LogsFileCmd 1.22
93 TestFunctional/serial/InvalidService 4.46
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 11.18
97 TestFunctional/parallel/DryRun 0.51
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.3
103 TestFunctional/parallel/ServiceCmdConnect 11.68
104 TestFunctional/parallel/AddonsCmd 0.21
105 TestFunctional/parallel/PersistentVolumeClaim 28.76
107 TestFunctional/parallel/SSHCmd 0.82
108 TestFunctional/parallel/CpCmd 2.72
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.22
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
119 TestFunctional/parallel/License 0.3
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
133 TestFunctional/parallel/ProfileCmd/profile_list 0.48
134 TestFunctional/parallel/ServiceCmd/List 0.63
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
137 TestFunctional/parallel/MountCmd/any-port 7.93
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
139 TestFunctional/parallel/ServiceCmd/Format 0.41
140 TestFunctional/parallel/ServiceCmd/URL 0.56
141 TestFunctional/parallel/MountCmd/specific-port 1.54
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
143 TestFunctional/parallel/Version/short 0.2
144 TestFunctional/parallel/Version/components 1.09
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.7
150 TestFunctional/parallel/ImageCommands/Setup 1.84
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.21
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
155 TestFunctional/parallel/DockerEnv/bash 1.31
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.07
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.89
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.01
162 TestFunctional/delete_addon-resizer_images 0.08
163 TestFunctional/delete_my-image_image 0.03
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestImageBuild/serial/Setup 34.45
169 TestImageBuild/serial/NormalBuild 1.78
170 TestImageBuild/serial/BuildWithBuildArg 0.91
171 TestImageBuild/serial/BuildWithDockerIgnore 0.74
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
180 TestJSONOutput/start/Command 81.73
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.59
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.52
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 10.85
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.24
205 TestKicCustomNetwork/create_custom_network 33.05
206 TestKicCustomNetwork/use_default_bridge_network 31.84
207 TestKicExistingNetwork 33.33
208 TestKicCustomSubnet 31.51
209 TestKicStaticIP 32.63
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 68.6
214 TestMountStart/serial/StartWithMountFirst 8.01
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 8.21
217 TestMountStart/serial/VerifyMountSecond 0.27
218 TestMountStart/serial/DeleteFirst 1.49
219 TestMountStart/serial/VerifyMountPostDelete 0.28
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 8.14
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 78.18
226 TestMultiNode/serial/DeployApp2Nodes 40.28
227 TestMultiNode/serial/PingHostFrom2Pods 1.07
228 TestMultiNode/serial/AddNode 18.28
229 TestMultiNode/serial/MultiNodeLabels 0.11
230 TestMultiNode/serial/ProfileList 0.33
231 TestMultiNode/serial/CopyFile 10.58
232 TestMultiNode/serial/StopNode 2.32
233 TestMultiNode/serial/StartAfterStop 13.85
234 TestMultiNode/serial/RestartKeepsNodes 122.61
235 TestMultiNode/serial/DeleteNode 5.06
236 TestMultiNode/serial/StopMultiNode 21.59
237 TestMultiNode/serial/RestartMultiNode 88.47
238 TestMultiNode/serial/ValidateNameConflict 38.9
243 TestPreload 176.2
245 TestScheduledStopUnix 106.22
246 TestSkaffold 119.58
248 TestInsufficientStorage 10.8
249 TestRunningBinaryUpgrade 71.84
252 TestMissingContainerUpgrade 142.09
254 TestPause/serial/Start 95.46
255 TestPause/serial/SecondStartNoReconfiguration 42.63
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 39.3
259 TestNoKubernetes/serial/StartWithStopK8s 16.72
260 TestPause/serial/Pause 0.73
261 TestPause/serial/VerifyStatus 0.32
262 TestPause/serial/Unpause 0.59
263 TestPause/serial/PauseAgain 0.8
264 TestPause/serial/DeletePaused 2.05
265 TestPause/serial/VerifyDeletedResources 14.4
266 TestNoKubernetes/serial/Start 7.59
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
279 TestNoKubernetes/serial/ProfileList 0.79
280 TestNoKubernetes/serial/Stop 1.26
281 TestNoKubernetes/serial/StartNoArgs 8.73
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
283 TestStoppedBinaryUpgrade/Setup 2.76
284 TestStoppedBinaryUpgrade/Upgrade 73.77
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
293 TestNetworkPlugins/group/auto/Start 85.2
294 TestNetworkPlugins/group/auto/KubeletFlags 0.44
295 TestNetworkPlugins/group/auto/NetCatPod 10.4
296 TestNetworkPlugins/group/auto/DNS 0.25
297 TestNetworkPlugins/group/auto/Localhost 0.17
298 TestNetworkPlugins/group/auto/HairPin 0.17
299 TestNetworkPlugins/group/kindnet/Start 72.78
300 TestNetworkPlugins/group/calico/Start 80.67
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.39
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/DNS 0.21
306 TestNetworkPlugins/group/kindnet/Localhost 0.17
307 TestNetworkPlugins/group/kindnet/HairPin 0.18
308 TestNetworkPlugins/group/calico/KubeletFlags 0.29
309 TestNetworkPlugins/group/calico/NetCatPod 11.28
310 TestNetworkPlugins/group/calico/DNS 0.29
311 TestNetworkPlugins/group/calico/Localhost 0.35
312 TestNetworkPlugins/group/calico/HairPin 0.3
313 TestNetworkPlugins/group/custom-flannel/Start 68.1
314 TestNetworkPlugins/group/false/Start 93.17
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
317 TestNetworkPlugins/group/custom-flannel/DNS 0.2
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
320 TestNetworkPlugins/group/enable-default-cni/Start 56.64
321 TestNetworkPlugins/group/false/KubeletFlags 0.38
322 TestNetworkPlugins/group/false/NetCatPod 11.31
323 TestNetworkPlugins/group/false/DNS 0.32
324 TestNetworkPlugins/group/false/Localhost 0.25
325 TestNetworkPlugins/group/false/HairPin 0.2
326 TestNetworkPlugins/group/flannel/Start 68.12
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.33
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
332 TestNetworkPlugins/group/bridge/Start 51.31
333 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
335 TestNetworkPlugins/group/flannel/NetCatPod 10.39
336 TestNetworkPlugins/group/flannel/DNS 0.2
337 TestNetworkPlugins/group/flannel/Localhost 0.17
338 TestNetworkPlugins/group/flannel/HairPin 0.21
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
340 TestNetworkPlugins/group/bridge/NetCatPod 11.4
341 TestNetworkPlugins/group/bridge/DNS 0.3
342 TestNetworkPlugins/group/bridge/Localhost 0.17
343 TestNetworkPlugins/group/bridge/HairPin 0.16
344 TestNetworkPlugins/group/kubenet/Start 92.59
347 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
348 TestNetworkPlugins/group/kubenet/NetCatPod 10.29
349 TestNetworkPlugins/group/kubenet/DNS 0.19
350 TestNetworkPlugins/group/kubenet/Localhost 0.17
351 TestNetworkPlugins/group/kubenet/HairPin 0.18
353 TestStartStop/group/no-preload/serial/FirstStart 54.99
354 TestStartStop/group/no-preload/serial/DeployApp 8.34
355 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
356 TestStartStop/group/no-preload/serial/Stop 10.78
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 314.94
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
360 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
361 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
362 TestStartStop/group/no-preload/serial/Pause 2.86
364 TestStartStop/group/embed-certs/serial/FirstStart 45.62
367 TestStartStop/group/embed-certs/serial/DeployApp 9.31
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
369 TestStartStop/group/embed-certs/serial/Stop 10.9
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
371 TestStartStop/group/embed-certs/serial/SecondStart 316.44
372 TestStartStop/group/old-k8s-version/serial/Stop 1.24
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
378 TestStartStop/group/embed-certs/serial/Pause 2.82
380 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.07
381 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.9
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.01
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.18
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
392 TestStartStop/group/newest-cni/serial/FirstStart 46.01
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
395 TestStartStop/group/newest-cni/serial/Stop 9.13
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
397 TestStartStop/group/newest-cni/serial/SecondStart 33.88
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
401 TestStartStop/group/newest-cni/serial/Pause 2.78
TestDownloadOnly/v1.16.0/json-events (18.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-233338 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-233338 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.747323801s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.75s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-233338
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-233338: exit status 85 (77.887458ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-233338 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |          |
	|         | -p download-only-233338        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:41:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:41:29.924495    7518 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:41:29.924676    7518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:29.924687    7518 out.go:304] Setting ErrFile to fd 2...
	I0216 16:41:29.924693    7518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:29.925466    7518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	W0216 16:41:29.925626    7518 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17936-2208/.minikube/config/config.json: open /home/jenkins/minikube-integration/17936-2208/.minikube/config/config.json: no such file or directory
	I0216 16:41:29.926064    7518 out.go:298] Setting JSON to true
	I0216 16:41:29.926833    7518 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1440,"bootTime":1708100250,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:41:29.926901    7518 start.go:139] virtualization:  
	I0216 16:41:29.929849    7518 out.go:97] [download-only-233338] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W0216 16:41:29.930029    7518 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball: no such file or directory
	I0216 16:41:29.932023    7518 out.go:169] MINIKUBE_LOCATION=17936
	I0216 16:41:29.930133    7518 notify.go:220] Checking for updates...
	I0216 16:41:29.935590    7518 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:41:29.937547    7518 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:41:29.939461    7518 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:41:29.941277    7518 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0216 16:41:29.944687    7518 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:41:29.944951    7518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:41:29.965536    7518 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:41:29.965638    7518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:30.309847    7518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 16:41:30.299744481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:41:30.309949    7518 docker.go:295] overlay module found
	I0216 16:41:30.311940    7518 out.go:97] Using the docker driver based on user configuration
	I0216 16:41:30.311968    7518 start.go:299] selected driver: docker
	I0216 16:41:30.311977    7518 start.go:903] validating driver "docker" against <nil>
	I0216 16:41:30.312070    7518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:30.370333    7518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-16 16:41:30.36156096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:41:30.370503    7518 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:41:30.370788    7518 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0216 16:41:30.370957    7518 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:41:30.372996    7518 out.go:169] Using Docker driver with root privileges
	I0216 16:41:30.375040    7518 cni.go:84] Creating CNI manager for ""
	I0216 16:41:30.375067    7518 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0216 16:41:30.375084    7518 start_flags.go:323] config:
	{Name:download-only-233338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-233338 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:41:30.377195    7518 out.go:97] Starting control plane node download-only-233338 in cluster download-only-233338
	I0216 16:41:30.377217    7518 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:41:30.379104    7518 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:41:30.379128    7518 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 16:41:30.379274    7518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:41:30.393813    7518 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:41:30.393977    7518 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:41:30.394090    7518 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:41:30.450993    7518 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0216 16:41:30.451020    7518 cache.go:56] Caching tarball of preloaded images
	I0216 16:41:30.451224    7518 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0216 16:41:30.454041    7518 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0216 16:41:30.454061    7518 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:41:30.559410    7518 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0216 16:41:44.660733    7518 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:41:44.660857    7518 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-233338"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
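
Exit status 85 is the expected outcome here: --download-only never creates a control-plane node, so logs has nothing to read, and the test treats the non-zero exit as a pass. A sketch of the same check:

    # Expected to fail with exit status 85 on a download-only profile, as observed above.
    out/minikube-linux-arm64 logs -p download-only-233338
    echo "exit: $?"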

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-233338
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)
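
As the name suggests, DeleteAlwaysSucceeds expects profile deletion to exit 0 even when there is nothing left to delete; a sketch (the second call targets an already-removed profile):

    # Both invocations are expected to exit 0.
    out/minikube-linux-arm64 delete -p download-only-233338
    out/minikube-linux-arm64 delete -p download-only-233338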

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-806228 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-806228 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.848744353s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.85s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-806228
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-806228: exit status 85 (76.59997ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-233338 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-233338        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC | 16 Feb 24 16:41 UTC |
	| delete  | -p download-only-233338        | download-only-233338 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC | 16 Feb 24 16:41 UTC |
	| start   | -o=json --download-only        | download-only-806228 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-806228        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:41:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:41:49.085294    7681 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:41:49.085456    7681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:49.085467    7681 out.go:304] Setting ErrFile to fd 2...
	I0216 16:41:49.085473    7681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:41:49.085717    7681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:41:49.086096    7681 out.go:298] Setting JSON to true
	I0216 16:41:49.086805    7681 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1459,"bootTime":1708100250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:41:49.086868    7681 start.go:139] virtualization:  
	I0216 16:41:49.089295    7681 out.go:97] [download-only-806228] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 16:41:49.091369    7681 out.go:169] MINIKUBE_LOCATION=17936
	I0216 16:41:49.089851    7681 notify.go:220] Checking for updates...
	I0216 16:41:49.095358    7681 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:41:49.097449    7681 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:41:49.099230    7681 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:41:49.101147    7681 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0216 16:41:49.106087    7681 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:41:49.106401    7681 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:41:49.125759    7681 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:41:49.125865    7681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:49.194497    7681 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:41:49.185507667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:41:49.194607    7681 docker.go:295] overlay module found
	I0216 16:41:49.196765    7681 out.go:97] Using the docker driver based on user configuration
	I0216 16:41:49.196789    7681 start.go:299] selected driver: docker
	I0216 16:41:49.196796    7681 start.go:903] validating driver "docker" against <nil>
	I0216 16:41:49.196896    7681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:41:49.259420    7681 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:41:49.251061551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:41:49.259579    7681 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:41:49.259862    7681 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0216 16:41:49.260014    7681 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:41:49.262152    7681 out.go:169] Using Docker driver with root privileges
	I0216 16:41:49.264029    7681 cni.go:84] Creating CNI manager for ""
	I0216 16:41:49.264052    7681 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:41:49.264064    7681 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 16:41:49.264078    7681 start_flags.go:323] config:
	{Name:download-only-806228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-806228 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:41:49.266334    7681 out.go:97] Starting control plane node download-only-806228 in cluster download-only-806228
	I0216 16:41:49.266356    7681 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:41:49.268257    7681 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:41:49.268280    7681 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:41:49.268377    7681 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:41:49.285452    7681 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:41:49.285587    7681 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:41:49.285615    7681 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 16:41:49.285625    7681 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 16:41:49.285633    7681 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 16:41:49.347782    7681 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0216 16:41:49.347805    7681 cache.go:56] Caching tarball of preloaded images
	I0216 16:41:49.347962    7681 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0216 16:41:49.349999    7681 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0216 16:41:49.350028    7681 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:41:49.469776    7681 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-806228"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-806228
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (16.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-323790 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-323790 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (16.974454075s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (16.97s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-323790
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-323790: exit status 85 (74.42366ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-233338 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-233338           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC | 16 Feb 24 16:41 UTC |
	| delete  | -p download-only-233338           | download-only-233338 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC | 16 Feb 24 16:41 UTC |
	| start   | -o=json --download-only           | download-only-806228 | jenkins | v1.32.0 | 16 Feb 24 16:41 UTC |                     |
	|         | -p download-only-806228           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| delete  | -p download-only-806228           | download-only-806228 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC | 16 Feb 24 16:42 UTC |
	| start   | -o=json --download-only           | download-only-323790 | jenkins | v1.32.0 | 16 Feb 24 16:42 UTC |                     |
	|         | -p download-only-323790           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/16 16:42:02
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0216 16:42:02.358534    7842 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:42:02.358726    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:02.358751    7842 out.go:304] Setting ErrFile to fd 2...
	I0216 16:42:02.358772    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:42:02.359074    7842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:42:02.359521    7842 out.go:298] Setting JSON to true
	I0216 16:42:02.360285    7842 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1472,"bootTime":1708100250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:42:02.360379    7842 start.go:139] virtualization:  
	I0216 16:42:02.363216    7842 out.go:97] [download-only-323790] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 16:42:02.365630    7842 out.go:169] MINIKUBE_LOCATION=17936
	I0216 16:42:02.363413    7842 notify.go:220] Checking for updates...
	I0216 16:42:02.370185    7842 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:42:02.372452    7842 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:42:02.374358    7842 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:42:02.376270    7842 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0216 16:42:02.379973    7842 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0216 16:42:02.380219    7842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:42:02.399403    7842 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:42:02.399487    7842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:02.464283    7842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:42:02.454846841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:42:02.464382    7842 docker.go:295] overlay module found
	I0216 16:42:02.466961    7842 out.go:97] Using the docker driver based on user configuration
	I0216 16:42:02.466990    7842 start.go:299] selected driver: docker
	I0216 16:42:02.466997    7842 start.go:903] validating driver "docker" against <nil>
	I0216 16:42:02.467102    7842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:42:02.524846    7842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-16 16:42:02.516539325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:42:02.525017    7842 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0216 16:42:02.525290    7842 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0216 16:42:02.525443    7842 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0216 16:42:02.527791    7842 out.go:169] Using Docker driver with root privileges
	I0216 16:42:02.529889    7842 cni.go:84] Creating CNI manager for ""
	I0216 16:42:02.529913    7842 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0216 16:42:02.529926    7842 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0216 16:42:02.529937    7842 start_flags.go:323] config:
	{Name:download-only-323790 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-323790 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:42:02.532131    7842 out.go:97] Starting control plane node download-only-323790 in cluster download-only-323790
	I0216 16:42:02.532149    7842 cache.go:121] Beginning downloading kic base image for docker with docker
	I0216 16:42:02.533949    7842 out.go:97] Pulling base image v0.0.42-1708008208-17936 ...
	I0216 16:42:02.533972    7842 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 16:42:02.534077    7842 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
	I0216 16:42:02.548195    7842 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf to local cache
	I0216 16:42:02.548315    7842 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory
	I0216 16:42:02.548333    7842 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local cache directory, skipping pull
	I0216 16:42:02.548338    7842 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in cache, skipping pull
	I0216 16:42:02.548345    7842 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf as a tarball
	I0216 16:42:02.593891    7842 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0216 16:42:02.593914    7842 cache.go:56] Caching tarball of preloaded images
	I0216 16:42:02.594054    7842 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0216 16:42:02.596206    7842 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0216 16:42:02.596224    7842 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0216 16:42:02.791143    7842 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /home/jenkins/minikube-integration/17936-2208/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-323790"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-323790
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-025721 --alsologtostderr --binary-mirror http://127.0.0.1:46499 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-025721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-025721
--- PASS: TestBinaryMirror (0.62s)
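
--binary-mirror redirects the kubectl/kubelet/kubeadm downloads to an alternate host; here the test points it at a local server on 127.0.0.1:46499. A sketch of the flag's shape, where MIRROR and the profile name are hypothetical:

    # Hypothetical: MIRROR must expose the same release-binary paths as the default source.
    out/minikube-linux-arm64 start --download-only -p binary-mirror-test \
      --binary-mirror "$MIRROR" --driver=docker --container-runtime=docker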

                                                
                                    
TestOffline (96.21s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-530328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-530328 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m33.933347111s)
helpers_test.go:175: Cleaning up "offline-docker-530328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-530328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-530328: (2.280486855s)
--- PASS: TestOffline (96.21s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-105162
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-105162: exit status 85 (113.561955ms)

                                                
                                                
-- stdout --
	* Profile "addons-105162" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-105162"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-105162
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-105162: exit status 85 (105.053717ms)

                                                
                                                
-- stdout --
	* Profile "addons-105162" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-105162"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)
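
Both PreSetup tests assert only the exit code: addon commands against a profile that does not exist yet must fail fast with exit status 85 instead of creating any state. The same behavior can be observed by hand (the profile name below is illustrative):

minikube addons enable dashboard -p no-such-profile
echo $?   # 85: the profile was never created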

                                                
                                    
TestAddons/Setup (148.89s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-105162 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-105162 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m28.889534175s)
--- PASS: TestAddons/Setup (148.89s)
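
The setup run enables a dozen addons in a single start. Once a cluster like this is up, the effective addon state is queryable per profile, and addons can be toggled without restarting; a short sketch:

# show every addon and whether it is enabled for this profile
minikube addons list -p addons-105162

# addons can also be flipped on a running cluster
minikube addons disable cloud-spanner -p addons-105162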

                                                
                                    
TestAddons/parallel/Registry (14.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 40.719717ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8w4b4" [97a11573-b083-44c5-ae7a-d82fb336cc2d] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006004424s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-27c7c" [6f79633b-2c8d-4708-9f8b-3ce114882530] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00718712s
addons_test.go:340: (dbg) Run:  kubectl --context addons-105162 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-105162 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-105162 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.5986621s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 ip
2024/02/16 16:45:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.64s)
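
The registry check has two halves: an in-cluster probe (a throwaway busybox pod issuing wget --spider against the registry Service DNS name) and a host-side probe against registry-proxy on the node IP at port 5000. Reproduced by hand; the /v2/_catalog path is the standard registry catalog endpoint, an addition here rather than part of the test:

# in-cluster: the Service must answer HTTP
kubectl --context addons-105162 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# host-side: registry-proxy publishes the registry on the node IP
curl -s "http://$(minikube -p addons-105162 ip):5000/v2/_catalog"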

                                                
                                    
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ddc6j" [5f8a5442-4c0f-42c5-b0a5-ad18e89a10c2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005027545s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-105162
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-105162: (5.747150782s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.776069ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-cfmzq" [64d818de-dd93-47b2-958e-c9ceef37374b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004814282s
addons_test.go:415: (dbg) Run:  kubectl --context addons-105162 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)
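
Beyond waiting for the k8s-app=metrics-server pod, the test's real assertion is that the metrics API answers kubectl top. The same check by hand; both commands fail until metrics-server has completed at least one scrape:

kubectl --context addons-105162 top pods -n kube-system
kubectl --context addons-105162 top nodes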

                                                
                                    
TestAddons/parallel/CSI (60.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 9.499036ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-105162 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-105162 get pvc hpvc -o jsonpath={.status.phase} -n default
(the same poll ran 10 times in total while waiting for pvc "hpvc" to bind)
addons_test.go:574: (dbg) Run:  kubectl --context addons-105162 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [26743420-cd25-4d69-af32-aeac349185e7] Pending
helpers_test.go:344: "task-pv-pod" [26743420-cd25-4d69-af32-aeac349185e7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [26743420-cd25-4d69-af32-aeac349185e7] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004127003s
addons_test.go:584: (dbg) Run:  kubectl --context addons-105162 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-105162 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-105162 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-105162 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-105162 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-105162 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-105162 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
(the same poll ran 21 times in total while waiting for pvc "hpvc-restore" to bind)
addons_test.go:616: (dbg) Run:  kubectl --context addons-105162 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4a99eb62-625d-423a-a5b9-b84d89167bcf] Pending
helpers_test.go:344: "task-pv-pod-restore" [4a99eb62-625d-423a-a5b9-b84d89167bcf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4a99eb62-625d-423a-a5b9-b84d89167bcf] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004051071s
addons_test.go:626: (dbg) Run:  kubectl --context addons-105162 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-105162 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-105162 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-105162 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.771397403s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.57s)
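
The CSI sequence above is: create a PVC, attach a pod, snapshot the volume, delete both, then restore a fresh PVC from the snapshot and attach a restore pod. The testdata manifests are not included in this log; the claim below is an illustrative equivalent of the restore step, and the csi-hostpath-sc class name is assumed from the csi-hostpath-driver addon's defaults rather than read from the run:

kubectl --context addons-105162 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc     # assumed addon default class
  dataSource:                           # restore from the snapshot taken earlier
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                      # size illustrative
EOF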

                                                
                                    
TestAddons/parallel/Headlamp (13.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-105162 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-105162 --alsologtostderr -v=1: (1.410177839s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-qgck5" [1df77281-8918-4c5a-9193-d018b3f651f8] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-qgck5" [1df77281-8918-4c5a-9193-d018b3f651f8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-qgck5" [1df77281-8918-4c5a-9193-d018b3f651f8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003643463s
--- PASS: TestAddons/parallel/Headlamp (13.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7b4754d5d4-npn2j" [266fec08-be84-4a33-afdf-3cf288933183] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003804154s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-105162
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/LocalPath (53.39s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-105162 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-105162 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-105162 get pvc test-pvc -o jsonpath={.status.phase} -n default
(the same poll ran 6 times in total while waiting for pvc "test-pvc")
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [860ae4f9-a923-48d9-9364-5a8b3cc82b65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [860ae4f9-a923-48d9-9364-5a8b3cc82b65] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [860ae4f9-a923-48d9-9364-5a8b3cc82b65] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003734598s
addons_test.go:891: (dbg) Run:  kubectl --context addons-105162 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 ssh "cat /opt/local-path-provisioner/pvc-9e738d8f-6f65-49b4-8b38-b2f24abc3b7b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-105162 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-105162 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-105162 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-105162 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.178812498s)
--- PASS: TestAddons/parallel/LocalPath (53.39s)
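
local-path provisions hostPath-backed volumes under /opt/local-path-provisioner on the node, which is why the test can read the written file back over minikube ssh (the pvc-..._default_test-pvc directory name encodes PV name, namespace, and claim). An illustrative claim; the local-path class name is the provisioner's well-known default and the size is made up, neither is read from the testdata:

kubectl --context addons-105162 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF

# local-path binds lazily (WaitForFirstConsumer): the claim stays Pending
# until a pod mounts it, matching the repeated Pending polls above
kubectl --context addons-105162 get pvc test-pvc -o jsonpath={.status.phase}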

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b9xb9" [72e6dd2f-3d43-4897-aa2a-ceb463bb124e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004939435s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-105162
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gfmnx" [2c49231b-da92-4add-8e38-4861fa91920f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003747095s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-105162 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-105162 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
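
The check relies on the gcp-auth addon's webhook replicating its credentials secret into namespaces as they are created, so a plain get succeeds with no manual copy:

kubectl --context addons-105162 create ns new-namespace
kubectl --context addons-105162 get secret gcp-auth -n new-namespace   # present without any copy step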

                                                
                                    
TestAddons/StoppedEnableDisable (11.11s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-105162
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-105162: (10.82522825s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-105162
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-105162
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-105162
--- PASS: TestAddons/StoppedEnableDisable (11.11s)

                                                
                                    
TestCertOptions (38.23s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-762895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-762895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.483041733s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-762895 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-762895 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-762895 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-762895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-762895
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-762895: (2.069716396s)
--- PASS: TestCertOptions (38.23s)
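
The assertions behind TestCertOptions: custom --apiserver-ips/--apiserver-names must land in the serving certificate's SANs, and the custom --apiserver-port must appear in the kubeconfig. A hand check against the same profile:

# SANs should include 192.168.15.15 and www.google.com
minikube -p cert-options-762895 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"

# the kubeconfig server URL should carry the custom port 8555
kubectl --context cert-options-762895 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-762895")].cluster.server}'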

                                                
                                    
TestCertExpiration (246.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-192643 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-192643 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (36.994982274s)
E0216 17:27:03.078817    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
(the same cert_rotation error repeated 11 more times between 17:27:03 and 17:27:13)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-192643 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0216 17:29:46.921820    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:29:50.355130    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-192643 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.490413294s)
helpers_test.go:175: Cleaning up "cert-expiration-192643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-192643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-192643: (2.140442553s)
--- PASS: TestCertExpiration (246.63s)
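
The test issues certificates with a 3-minute lifetime, waits out the expiry (hence the gap in the timestamps above), then restarts with --cert-expiration=8760h to force regeneration. The current expiry can be read directly off the node:

# print the notAfter date of the apiserver serving certificate
minikube -p cert-expiration-192643 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"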

                                                
                                    
TestDockerFlags (43.75s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-467828 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-467828 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.988948334s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-467828 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-467828 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-467828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-467828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-467828: (2.043572449s)
--- PASS: TestDockerFlags (43.75s)
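
--docker-env entries surface as Environment= lines in the docker systemd unit, and --docker-opt entries are appended to its ExecStart command line, which is exactly what the two systemctl probes above assert:

minikube -p docker-flags-467828 ssh \
  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR, BAZ=BAT
minikube -p docker-flags-467828 ssh \
  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true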

                                                
                                    
TestForceSystemdFlag (43.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-070584 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0216 17:25:23.334615    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-070584 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.121689491s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-070584 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-070584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-070584
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-070584: (2.407355007s)
--- PASS: TestForceSystemdFlag (43.90s)
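
With --force-systemd the container runtime must report systemd as its cgroup driver (the default may otherwise be cgroupfs). The probe the test uses:

minikube -p force-systemd-flag-070584 ssh "docker info --format {{.CgroupDriver}}"
# expected output: systemd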

                                                
                                    
TestForceSystemdEnv (45.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-733535 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-733535 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.919819281s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-733535 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-733535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-733535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-733535: (2.415646858s)
--- PASS: TestForceSystemdEnv (45.81s)

                                                
                                    
TestErrorSpam/setup (33.34s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-269140 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-269140 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-269140 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-269140 --driver=docker  --container-runtime=docker: (33.340381287s)
--- PASS: TestErrorSpam/setup (33.34s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 pause
--- PASS: TestErrorSpam/pause (1.37s)

                                                
                                    
TestErrorSpam/unpause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

                                                
                                    
TestErrorSpam/stop (2.04s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 stop: (1.838318701s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-269140 --log_dir /tmp/nospam-269140 stop
--- PASS: TestErrorSpam/stop (2.04s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17936-2208/.minikube/files/etc/test/nested/copy/7513/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (44.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-918954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (44.588862263s)
--- PASS: TestFunctional/serial/StartWithProxy (44.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.71s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-918954 --alsologtostderr -v=8: (37.705995032s)
functional_test.go:659: soft start took 37.708722159s for "functional-918954" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.71s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-918954 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 cache add registry.k8s.io/pause:3.1: (1.036385897s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-918954 /tmp/TestFunctionalserialCacheCmdcacheadd_local1892454107/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache add minikube-local-cache-test:functional-918954
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache delete minikube-local-cache-test:functional-918954
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-918954
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (334.019768ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
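
minikube cache keeps images on the host and loads them into the node runtime; cache reload re-pushes every cached image, which the test exercises by deleting the image inside the node first. The full round trip:

minikube -p functional-918954 cache add registry.k8s.io/pause:latest
minikube -p functional-918954 ssh sudo docker rmi registry.k8s.io/pause:latest
minikube -p functional-918954 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
minikube -p functional-918954 cache reload
minikube -p functional-918954 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again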

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 kubectl -- --context functional-918954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-918954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0216 16:49:50.356540    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
(the same cert_rotation error repeated 12 more times between 16:49:50 and 16:50:10)
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-918954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.754276259s)
functional_test.go:757: restart took 39.75438323s for "functional-918954" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.75s)
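
--extra-config passes flags through to individual control-plane components; here apiserver.enable-admission-plugins reaches kube-apiserver. One way to confirm the flag took effect, assuming the usual kube-apiserver-<node> static-pod naming (not shown in this log):

kubectl --context functional-918954 -n kube-system get pod kube-apiserver-functional-918954 -o yaml \
  | grep enable-admission-plugins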

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-918954 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 logs: (1.217269082s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 logs --file /tmp/TestFunctionalserialLogsFileCmd4288111033/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 logs --file /tmp/TestFunctionalserialLogsFileCmd4288111033/001/logs.txt: (1.216608391s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

                                                
                                    
TestFunctional/serial/InvalidService (4.46s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-918954 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-918954
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-918954: exit status 115 (610.246727ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30353 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-918954 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)
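
[Editor's note] The exit status 115 above is the expected outcome: invalid-svc has no ready backing pod, so `minikube service` aborts with SVC_UNREACHABLE. A minimal Go sketch of the same check (illustrative only, not the test's actual helper code; profile name, binary path, and manifest path are taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Apply a Service manifest that no running pod satisfies.
    	exec.Command("kubectl", "--context", "functional-918954",
    		"apply", "-f", "testdata/invalidsvc.yaml").Run()

    	// With no endpoints behind invalid-svc, this should fail (status 115 in the log).
    	err := exec.Command("out/minikube-linux-arm64",
    		"service", "invalid-svc", "-p", "functional-918954").Run()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		fmt.Println("exit status:", exitErr.ExitCode())
    	}
    }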

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 config get cpus: exit status 14 (92.918189ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 config get cpus: exit status 14 (95.749489ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
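
[Editor's note] The two non-zero exits above are deliberate: `config get` on an unset key exits 14 with "specified key could not be found in config". A sketch of the round trip the test drives (illustrative; binary path and profile taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) {
    	cmd := exec.Command("out/minikube-linux-arm64",
    		append([]string{"-p", "functional-918954"}, args...)...)
    	out, err := cmd.CombinedOutput()
    	code := 0
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		code = exitErr.ExitCode()
    	}
    	fmt.Printf("%v -> %q (exit %d)\n", args, out, code)
    }

    func main() {
    	run("config", "unset", "cpus")
    	run("config", "get", "cpus") // exit 14: key absent
    	run("config", "set", "cpus", "2")
    	run("config", "get", "cpus") // prints "2", exit 0
    	run("config", "unset", "cpus")
    }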

TestFunctional/parallel/DashboardCmd (11.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918954 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-918954 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 43780: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.18s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (199.383504ms)

-- stdout --
	* [functional-918954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0216 16:50:57.467041   43476 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:50:57.467246   43476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:57.467272   43476 out.go:304] Setting ErrFile to fd 2...
	I0216 16:50:57.467293   43476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:57.467539   43476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:50:57.467914   43476 out.go:298] Setting JSON to false
	I0216 16:50:57.468924   43476 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2007,"bootTime":1708100250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:50:57.469019   43476 start.go:139] virtualization:  
	I0216 16:50:57.471876   43476 out.go:177] * [functional-918954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0216 16:50:57.474525   43476 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:50:57.476579   43476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:50:57.474599   43476 notify.go:220] Checking for updates...
	I0216 16:50:57.479142   43476 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:50:57.481720   43476 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:50:57.483511   43476 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 16:50:57.485561   43476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:50:57.487867   43476 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:50:57.488437   43476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:50:57.510454   43476 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:50:57.510566   43476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:50:57.591043   43476 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-16 16:50:57.581397986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:50:57.591155   43476 docker.go:295] overlay module found
	I0216 16:50:57.593276   43476 out.go:177] * Using the docker driver based on existing profile
	I0216 16:50:57.595208   43476 start.go:299] selected driver: docker
	I0216 16:50:57.595230   43476 start.go:903] validating driver "docker" against &{Name:functional-918954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-918954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:50:57.595342   43476 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:50:57.597734   43476 out.go:177] 
	W0216 16:50:57.599497   43476 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0216 16:50:57.601431   43476 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.51s)
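
[Editor's note] Exit status 23 on the first run is the point of the test: 250MB is below minikube's usable minimum of 1800MB, so validation rejects the request before any dry run, while the second invocation omits --memory and succeeds. A toy sketch of that bound check (the constant and wording mirror the log message; the function is hypothetical, not minikube's actual validator):

    package main

    import "fmt"

    // Hypothetical sketch of the memory floor enforced above.
    const minUsableMemoryMB = 1800

    func validateMemory(requestedMB int) error {
    	if requestedMB < minUsableMemoryMB {
    		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
    			requestedMB, minUsableMemoryMB)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(validateMemory(250))  // rejected, as in the first run
    	fmt.Println(validateMemory(4000)) // accepted, matching the profile's 4000MB
    }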

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (251.086568ms)

-- stdout --
	* [functional-918954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0216 16:50:57.239329   43392 out.go:291] Setting OutFile to fd 1 ...
	I0216 16:50:57.240339   43392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:57.240352   43392 out.go:304] Setting ErrFile to fd 2...
	I0216 16:50:57.240358   43392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 16:50:57.241310   43392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 16:50:57.241755   43392 out.go:298] Setting JSON to false
	I0216 16:50:57.242676   43392 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2007,"bootTime":1708100250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0216 16:50:57.242756   43392 start.go:139] virtualization:  
	I0216 16:50:57.246013   43392 out.go:177] * [functional-918954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0216 16:50:57.247642   43392 out.go:177]   - MINIKUBE_LOCATION=17936
	I0216 16:50:57.250117   43392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0216 16:50:57.247826   43392 notify.go:220] Checking for updates...
	I0216 16:50:57.256069   43392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	I0216 16:50:57.258391   43392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	I0216 16:50:57.260872   43392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0216 16:50:57.263533   43392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0216 16:50:57.266767   43392 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 16:50:57.267329   43392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0216 16:50:57.293741   43392 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0216 16:50:57.293894   43392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 16:50:57.387568   43392 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-16 16:50:57.378670843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 16:50:57.387685   43392 docker.go:295] overlay module found
	I0216 16:50:57.391590   43392 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0216 16:50:57.393410   43392 start.go:299] selected driver: docker
	I0216 16:50:57.393427   43392 start.go:903] validating driver "docker" against &{Name:functional-918954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-918954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0216 16:50:57.393543   43392 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0216 16:50:57.396060   43392 out.go:177] 
	W0216 16:50:57.398618   43392 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0216 16:50:57.400981   43392 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
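
[Editor's note] The French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun; it translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The test passes because the message comes back localized. A sketch of forcing the localized output (the LC_ALL/LANG variables are an assumption about how the test selects the locale, not confirmed by this log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-918954",
    		"--dry-run", "--memory", "250MB", "--driver=docker")
    	// Assumed mechanism: select French via the environment.
    	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr")
    	out, _ := cmd.CombinedOutput()
    	os.Stdout.Write(out) // French RSRC_INSUFFICIENT_REQ_MEMORY message, as above
    }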

TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
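
[Editor's note] The -f flag takes a Go text/template rendered against the status struct; in the command above, "kublet:" is literal template text as recorded in the log (the field itself is {{.Kubelet}}). A standalone sketch of the same rendering, with a hypothetical stand-in for minikube's status type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical stand-in; the field names match the template keys in the log.
    type Status struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse(
    		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
    	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
    }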

TestFunctional/parallel/ServiceCmdConnect (11.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-918954 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-918954 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qpmtd" [070e2c72-22aa-4fcf-a9a1-1f57c83df9de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qpmtd" [070e2c72-22aa-4fcf-a9a1-1f57c83df9de] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.007110477s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31200
functional_test.go:1671: http://192.168.49.2:31200: success! body:

Hostname: hello-node-connect-7799dfb7c6-qpmtd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31200
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.68s)
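
[Editor's note] The flow above is: create a deployment, expose it as a NodePort service, resolve the node URL, then GET it and read back the echoserver body. A compressed sketch of that round trip (names and image taken from the log; error handling mostly elided):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"os/exec"
    	"strings"
    )

    func main() {
    	exec.Command("kubectl", "--context", "functional-918954", "create", "deployment",
    		"hello-node-connect", "--image=registry.k8s.io/echoserver-arm:1.8").Run()
    	exec.Command("kubectl", "--context", "functional-918954", "expose", "deployment",
    		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

    	// `minikube service --url` prints e.g. http://192.168.49.2:31200
    	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-918954",
    		"service", "hello-node-connect", "--url").Output()
    	resp, err := http.Get(strings.TrimSpace(string(out)))
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // Hostname, request headers, etc., as echoed above
    }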

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (28.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d06b24a4-2141-4629-8b70-8d5c08f7665e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006296898s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-918954 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-918954 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-918954 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-918954 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-918954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1cfa31c-9cba-4028-81c3-0bd69b123221] Pending
E0216 16:50:31.318467    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [a1cfa31c-9cba-4028-81c3-0bd69b123221] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1cfa31c-9cba-4028-81c3-0bd69b123221] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004247239s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-918954 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-918954 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-918954 delete -f testdata/storage-provisioner/pod.yaml: (1.151933014s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-918954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4504474b-8ec7-4859-b8b6-89336c6090c6] Pending
helpers_test.go:344: "sp-pod" [4504474b-8ec7-4859-b8b6-89336c6090c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4504474b-8ec7-4859-b8b6-89336c6090c6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004589437s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-918954 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.76s)
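
[Editor's note] The persistence check above is: write a file through the first sp-pod, delete and recreate the pod against the same PVC, then confirm the file survived. A sketch of just that sequence (illustrative; waiting for the recreated pod to become Ready is elided):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kctl runs kubectl against the profile's context from the log.
    func kctl(args ...string) ([]byte, error) {
    	return exec.Command("kubectl",
    		append([]string{"--context", "functional-918954"}, args...)...).CombinedOutput()
    }

    func main() {
    	kctl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
    	kctl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    	kctl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    	// ...wait for the recreated sp-pod to be Running, then:
    	out, _ := kctl("exec", "sp-pod", "--", "ls", "/tmp/mount")
    	fmt.Printf("%s", out) // "foo" shows the volume outlived the pod
    }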

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (2.72s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh -n functional-918954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cp functional-918954:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2024420332/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh -n functional-918954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh -n functional-918954 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.72s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7513/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /etc/test/nested/copy/7513/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7513.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /etc/ssl/certs/7513.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7513.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /usr/share/ca-certificates/7513.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /etc/ssl/certs/75132.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /usr/share/ca-certificates/75132.pem"
E0216 16:51:12.279168    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)
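
[Editor's note] The numeric filenames (51391683.0, 3ec20f2e.0) appear to be OpenSSL subject-hash names: TLS stacks look up a CA in /etc/ssl/certs by the hash of its subject, so minikube syncs each certificate under both its .pem name and its hash. A sketch of computing that hash by shelling out to openssl (one way to obtain it, not what the test itself does; cert.pem is a placeholder path):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Equivalent to: openssl x509 -in cert.pem -noout -subject_hash
    	out, err := exec.Command("openssl", "x509",
    		"-in", "cert.pem", "-noout", "-subject_hash").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
    }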

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-918954 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 ssh "sudo systemctl is-active crio": exit status 1 (359.790635ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
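
[Editor's note] The non-zero exit is the success condition here: `systemctl is-active` exits 0 only for an active unit, and "inactive" comes back with a non-zero status (3 above), confirming cri-o is disabled while docker is the active runtime. A sketch of the check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-918954",
    		"ssh", "sudo systemctl is-active crio").CombinedOutput()
    	fmt.Printf("%s", out) // "inactive"
    	if err != nil {
    		fmt.Println("non-zero exit, so the runtime is not active:", err)
    	}
    }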

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41012: os: process already finished
helpers_test.go:502: unable to terminate pid 40855: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-918954 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2dd0f47a-dc3b-45b7-a826-3d257c8d51e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2dd0f47a-dc3b-45b7-a826-3d257c8d51e6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004110967s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-918954 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.167.68 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-918954 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-918954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-918954 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-hj7wc" [b1302942-d72d-4169-9b82-0fd92fc6f464] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-hj7wc" [b1302942-d72d-4169-9b82-0fd92fc6f464] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.007448087s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)
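
[Editor's note] "profile lis" is deliberate: the test feeds a misspelled subcommand and then lists profiles to assert that no profile was created as a side effect. A sketch of that assertion (the JSON shape here is simplified and assumed, not taken from this log):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	exec.Command("out/minikube-linux-arm64", "profile", "lis").Run() // typo on purpose

    	out, _ := exec.Command("out/minikube-linux-arm64",
    		"profile", "list", "--output", "json").Output()
    	// Assumed, simplified view of the JSON: only profile names matter here.
    	var v struct {
    		Valid []struct{ Name string } `json:"valid"`
    	}
    	json.Unmarshal(out, &v)
    	for _, p := range v.Valid {
    		if p.Name == "lis" {
    			fmt.Println("BUG: bogus profile was created")
    		}
    	}
    }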

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "402.460534ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "75.872499ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "397.767297ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "68.47918ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service list -o json
functional_test.go:1490: Took "638.22712ms" to run "out/minikube-linux-arm64 -p functional-918954 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/MountCmd/any-port (7.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdany-port2486177629/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1708102254102891870" to /tmp/TestFunctionalparallelMountCmdany-port2486177629/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1708102254102891870" to /tmp/TestFunctionalparallelMountCmdany-port2486177629/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1708102254102891870" to /tmp/TestFunctionalparallelMountCmdany-port2486177629/001/test-1708102254102891870
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (441.785791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 16 16:50 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 16 16:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 16 16:50 test-1708102254102891870
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh cat /mount-9p/test-1708102254102891870
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-918954 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8225e1c4-eacb-456b-88b2-139f442ae37f] Pending
helpers_test.go:344: "busybox-mount" [8225e1c4-eacb-456b-88b2-139f442ae37f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8225e1c4-eacb-456b-88b2-139f442ae37f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8225e1c4-eacb-456b-88b2-139f442ae37f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00427573s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-918954 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdany-port2486177629/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.93s)
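
[Editor's note] The first findmnt probe failing with exit 1 is a benign race: the 9p mount had not finished appearing inside the guest, and the test's next attempt succeeds. A sketch of that poll-until-mounted loop (the retry cadence is illustrative, not the test's actual backoff):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 10; i++ {
    		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-918954",
    			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
    		if err == nil {
    			fmt.Println("/mount-9p is a 9p mount")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("mount never appeared")
    }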

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31314
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31314
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

TestFunctional/parallel/MountCmd/specific-port (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdspecific-port2158291867/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdspecific-port2158291867/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 ssh "sudo umount -f /mount-9p": exit status 1 (383.224041ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-918954 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdspecific-port2158291867/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.54s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T" /mount1: (1.165413647s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-918954 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2144661614/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
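
Note: the three "unable to find parent, assuming dead" lines are the cleanup check succeeding: after `mount --kill=true`, none of the three mount daemons should still exist. A by-hand sketch (host path assumed):

    minikube mount -p functional-918954 /tmp/src:/mount1 &
    minikube mount -p functional-918954 /tmp/src:/mount2 &
    minikube mount -p functional-918954 /tmp/src:/mount3 &
    # one flag tears down every mount process belonging to the profile
    minikube mount -p functional-918954 --kill=true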

TestFunctional/parallel/Version/short (0.2s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 version --short
--- PASS: TestFunctional/parallel/Version/short (0.20s)

TestFunctional/parallel/Version/components (1.09s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 version -o=json --components: (1.09334777s)
--- PASS: TestFunctional/parallel/Version/components (1.09s)
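
Note: `version --short` prints only the minikube version, while `-o=json --components` also reports the versions of the bundled components. A sketch of reading it by hand (the jq pretty-print is an assumption, not part of the test):

    minikube -p functional-918954 version --short
    minikube -p functional-918954 version -o=json --components | jq .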

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918954 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-918954
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-918954
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918954 image ls --format short --alsologtostderr:
I0216 16:51:24.430331   46259 out.go:291] Setting OutFile to fd 1 ...
I0216 16:51:24.430508   46259 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.430520   46259 out.go:304] Setting ErrFile to fd 2...
I0216 16:51:24.430527   46259 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.430801   46259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
I0216 16:51:24.431502   46259 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.431659   46259 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.432402   46259 cli_runner.go:164] Run: docker container inspect functional-918954 --format={{.State.Status}}
I0216 16:51:24.455159   46259 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:24.455218   46259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918954
I0216 16:51:24.478527   46259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/functional-918954/id_rsa Username:docker}
I0216 16:51:24.577007   46259 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918954 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-918954 | 128db3ee749e1 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-918954 | ffd4cfbbe753e | 32.9MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918954 image ls --format table --alsologtostderr:
I0216 16:51:25.029969   46382 out.go:291] Setting OutFile to fd 1 ...
I0216 16:51:25.030467   46382 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:25.030486   46382 out.go:304] Setting ErrFile to fd 2...
I0216 16:51:25.030493   46382 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:25.030961   46382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
I0216 16:51:25.031907   46382 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:25.032064   46382 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:25.032666   46382 cli_runner.go:164] Run: docker container inspect functional-918954 --format={{.State.Status}}
I0216 16:51:25.057873   46382 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:25.057926   46382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918954
I0216 16:51:25.075239   46382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/functional-918954/id_rsa Username:docker}
I0216 16:51:25.179081   46382 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918954 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bd
efdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"128db3ee749e19f5a667d9d7827b7b751732088b41cb2638bbd60bc75607dbb7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-918954"],"size":"30"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDiges
ts":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-918954"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-
glibc"],"size":"3550000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918954 image ls --format json --alsologtostderr:
I0216 16:51:24.730754   46312 out.go:291] Setting OutFile to fd 1 ...
I0216 16:51:24.731974   46312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.732022   46312 out.go:304] Setting ErrFile to fd 2...
I0216 16:51:24.732043   46312 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.732344   46312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
I0216 16:51:24.733099   46312 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.733278   46312 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.733805   46312 cli_runner.go:164] Run: docker container inspect functional-918954 --format={{.State.Status}}
I0216 16:51:24.773201   46312 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:24.773253   46312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918954
I0216 16:51:24.803441   46312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/functional-918954/id_rsa Username:docker}
I0216 16:51:24.901081   46312 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
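
Note: the JSON format is the machine-readable one; each element carries id, repoDigests, repoTags, and size, so the list scripts cleanly. A sketch (the jq filter is an assumption for illustration):

    # print "tag<TAB>size" for every image the cluster runtime holds
    minikube -p functional-918954 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'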

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918954 image ls --format yaml --alsologtostderr:
- id: 128db3ee749e19f5a667d9d7827b7b751732088b41cb2638bbd60bc75607dbb7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-918954
size: "30"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-918954
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918954 image ls --format yaml --alsologtostderr:
I0216 16:51:24.433291   46255 out.go:291] Setting OutFile to fd 1 ...
I0216 16:51:24.433465   46255 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.433477   46255 out.go:304] Setting ErrFile to fd 2...
I0216 16:51:24.433484   46255 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:24.433773   46255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
I0216 16:51:24.434440   46255 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.434602   46255 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:24.435164   46255 cli_runner.go:164] Run: docker container inspect functional-918954 --format={{.State.Status}}
I0216 16:51:24.459860   46255 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:24.459926   46255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918954
I0216 16:51:24.493150   46255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/functional-918954/id_rsa Username:docker}
I0216 16:51:24.599029   46255 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918954 ssh pgrep buildkitd: exit status 1 (339.828852ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image build -t localhost/my-image:functional-918954 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 image build -t localhost/my-image:functional-918954 testdata/build --alsologtostderr: (2.123954223s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918954 image build -t localhost/my-image:functional-918954 testdata/build --alsologtostderr:
I0216 16:51:25.023294   46388 out.go:291] Setting OutFile to fd 1 ...
I0216 16:51:25.023615   46388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:25.023629   46388 out.go:304] Setting ErrFile to fd 2...
I0216 16:51:25.023635   46388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:51:25.023964   46388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
I0216 16:51:25.024704   46388 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:25.026621   46388 config.go:182] Loaded profile config "functional-918954": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0216 16:51:25.027329   46388 cli_runner.go:164] Run: docker container inspect functional-918954 --format={{.State.Status}}
I0216 16:51:25.054541   46388 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:25.054593   46388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918954
I0216 16:51:25.080865   46388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/functional-918954/id_rsa Username:docker}
I0216 16:51:25.178153   46388 build_images.go:151] Building image from path: /tmp/build.2841187451.tar
I0216 16:51:25.178303   46388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0216 16:51:25.189811   46388 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2841187451.tar
I0216 16:51:25.195202   46388 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2841187451.tar: stat -c "%s %y" /var/lib/minikube/build/build.2841187451.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2841187451.tar': No such file or directory
I0216 16:51:25.195233   46388 ssh_runner.go:362] scp /tmp/build.2841187451.tar --> /var/lib/minikube/build/build.2841187451.tar (3072 bytes)
I0216 16:51:25.235840   46388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2841187451
I0216 16:51:25.244623   46388 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2841187451 -xf /var/lib/minikube/build/build.2841187451.tar
I0216 16:51:25.254267   46388 docker.go:360] Building image: /var/lib/minikube/build/build.2841187451
I0216 16:51:25.254365   46388 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-918954 /var/lib/minikube/build/build.2841187451
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3afc382f17e641035d2762c6deaa7949cd58ea0168e68b60f9a875d409c336ce done
#8 naming to localhost/my-image:functional-918954 done
#8 DONE 0.0s
I0216 16:51:27.043368   46388 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-918954 /var/lib/minikube/build/build.2841187451: (1.788976565s)
I0216 16:51:27.043445   46388 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2841187451
I0216 16:51:27.052997   46388 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2841187451.tar
I0216 16:51:27.063094   46388 build_images.go:207] Built localhost/my-image:functional-918954 from /tmp/build.2841187451.tar
I0216 16:51:27.063170   46388 build_images.go:123] succeeded building to: functional-918954
I0216 16:51:27.063183   46388 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.70s)
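
Note: the build log shows the whole path: the build context is tarred on the host, copied to /var/lib/minikube/build over ssh, extracted, and built with the node's docker daemon. The three steps imply a tiny Dockerfile; a plausible reconstruction of testdata/build (an assumption, not the checked-in files):

    mkdir -p build && cd build
    printf 'hello\n' > content.txt
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    minikube -p functional-918954 image build -t localhost/my-image:functional-918954 .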

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.805275834s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-918954
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr
2024/02/16 16:51:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr: (3.942157285s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.21s)
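
Note: `image load --daemon` pushes an image from the host docker daemon into the cluster's runtime, which is why Setup first pulls and retags it on the host. The full round trip, runnable by hand:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 \
      gcr.io/google-containers/addon-resizer:functional-918954
    minikube -p functional-918954 image load --daemon \
      gcr.io/google-containers/addon-resizer:functional-918954
    minikube -p functional-918954 image ls   # the tag now appears in-cluster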

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/DockerEnv/bash (1.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918954 docker-env) && out/minikube-linux-arm64 status -p functional-918954"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918954 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.31s)
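
Note: `docker-env` prints export statements, so eval'ing its output points the host docker CLI at the daemon inside the node; that is why the second `docker images` run lists the cluster's images. By hand:

    eval $(minikube -p functional-918954 docker-env)
    docker images    # now served by the docker daemon inside the minikube node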

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr: (2.983315625s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.551092549s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-918954
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 image load --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr: (3.279155846s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image save gcr.io/google-containers/addon-resizer:functional-918954 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.89s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image rm gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-918954 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.058118589s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)
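
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a tar-based round trip. A by-hand sketch (tar path assumed):

    minikube -p functional-918954 image save \
      gcr.io/google-containers/addon-resizer:functional-918954 /tmp/addon-resizer.tar
    minikube -p functional-918954 image rm \
      gcr.io/google-containers/addon-resizer:functional-918954
    minikube -p functional-918954 image load /tmp/addon-resizer.tar
    minikube -p functional-918954 image ls   # the tag is back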

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-918954
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-918954 image save --daemon gcr.io/google-containers/addon-resizer:functional-918954 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-918954
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-918954
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-918954
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-918954
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (34.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-388689 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-388689 --driver=docker  --container-runtime=docker: (34.449227545s)
--- PASS: TestImageBuild/serial/Setup (34.45s)

TestImageBuild/serial/NormalBuild (1.78s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-388689
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-388689: (1.784117934s)
--- PASS: TestImageBuild/serial/NormalBuild (1.78s)

TestImageBuild/serial/BuildWithBuildArg (0.91s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-388689
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.91s)
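
Note: `--build-opt=build-arg=...` and `--build-opt=no-cache` are passed through to the underlying build. A sketch of a Dockerfile that would consume the arg (the file contents are an assumption; only the flags appear in the test):

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    ARG ENV_A
    RUN echo "ENV_A is: $ENV_A"
    EOF
    minikube image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-388689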

TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-388689
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-388689
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

TestJSONOutput/start/Command (81.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-949398 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-949398 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m21.725939251s)
--- PASS: TestJSONOutput/start/Command (81.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-949398 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-949398 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-949398 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-949398 --output=json --user=testUser: (10.846750585s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-959488 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-959488 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.656683ms)
-- stdout --
	{"specversion":"1.0","id":"bfadddb3-9c78-4f3e-8e44-1e876ed71ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-959488] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"15053372-f28b-48ea-9ab6-92b25e6e6c17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"1344932b-bc82-49a0-a328-f5886275798c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5b0221a-3fcd-4994-8aae-29006b9ece88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig"}}
	{"specversion":"1.0","id":"bb8b882f-d064-4a2f-8513-6a5b71ce99b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube"}}
	{"specversion":"1.0","id":"c4e18485-0c56-4b5b-b8ce-3091814fae1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9d4ca8ce-7294-4d1f-b9fd-a8e87a0214cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1dccacd5-fef2-4dce-a8f1-c4d92ee3d9ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-959488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-959488
--- PASS: TestErrorJSONOutput (0.24s)
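
Note: with `--output=json`, every line minikube emits is a CloudEvent; failures arrive as io.k8s.sigs.minikube.error events with the exit code and message under .data, which is what this test asserts on. Extracting the error by hand (the jq step is an assumption for illustration):

    minikube start -p json-output-error-959488 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> The driver 'fail' is not supported on linux/arm64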

TestKicCustomNetwork/create_custom_network (33.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-720623 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-720623 --network=: (30.948177169s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-720623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-720623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-720623: (2.078821681s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.05s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.84s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-818022 --network=bridge
E0216 17:04:50.354744    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-818022 --network=bridge: (29.86421378s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-818022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-818022
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-818022: (1.957895167s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.84s)

                                                
                                    
TestKicExistingNetwork (33.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-543376 --network=existing-network
E0216 17:05:23.334816    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-543376 --network=existing-network: (31.209959308s)
helpers_test.go:175: Cleaning up "existing-network-543376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-543376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-543376: (1.966749472s)
--- PASS: TestKicExistingNetwork (33.33s)

                                                
                                    
TestKicCustomSubnet (31.51s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-549691 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-549691 --subnet=192.168.60.0/24: (29.449163398s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-549691 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-549691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-549691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-549691: (2.045957048s)
--- PASS: TestKicCustomSubnet (31.51s)
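Note: the subnet check above reads the network's IPAM config back through a Go template. A minimal sketch of the same verification, reusing the exact docker command and the subnet requested in this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs to read the first IPAM subnet of the network.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-549691",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: got %q\n", got)
	}
}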

                                                
                                    
TestKicStaticIP (32.63s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-031620 --static-ip=192.168.200.200
E0216 17:06:13.402901    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-031620 --static-ip=192.168.200.200: (30.383079738s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-031620 ip
helpers_test.go:175: Cleaning up "static-ip-031620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-031620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-031620: (2.079296718s)
--- PASS: TestKicStaticIP (32.63s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.6s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-066706 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-066706 --driver=docker  --container-runtime=docker: (30.981461846s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-069086 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-069086 --driver=docker  --container-runtime=docker: (32.384377542s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-066706
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-069086
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-069086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-069086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-069086: (2.008514185s)
helpers_test.go:175: Cleaning up "first-066706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-066706
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-066706: (2.019897988s)
--- PASS: TestMinikubeProfile (68.60s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-549244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-549244 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.012027732s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.01s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-549244 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-551205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-551205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.208778479s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.21s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-551205 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-549244 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-549244 --alsologtostderr -v=5: (1.49321234s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-551205 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-551205
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-551205: (1.207753079s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-551205
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-551205: (7.14375275s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-551205 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881244 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881244 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m17.515445008s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.18s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (40.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-881244 -- rollout status deployment/busybox: (2.847131103s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0216 17:09:50.355556    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-52wdg -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-h87b6 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-52wdg -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-h87b6 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-52wdg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-h87b6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (40.28s)
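Note: the repeated jsonpath queries above are a poll loop: the busybox deployment has two replicas, and the test retries until both pods report a podIP. A minimal re-creation of that loop; the retry count and sleep here are illustrative, not the test's actual backoff:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		// Same jsonpath the test uses: one space-separated IP per scheduled pod.
		out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-881244",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) == 2 {
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for 2 pod IPs")
}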

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-52wdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-52wdg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-h87b6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881244 -- exec busybox-5b5d89c9d6-h87b6 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)

                                                
                                    
TestMultiNode/serial/AddNode (18.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881244 -v 3 --alsologtostderr
E0216 17:10:23.335755    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-881244 -v 3 --alsologtostderr: (17.537938301s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.28s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-881244 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp testdata/cp-test.txt multinode-881244:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile547904591/001/cp-test_multinode-881244.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244:/home/docker/cp-test.txt multinode-881244-m02:/home/docker/cp-test_multinode-881244_multinode-881244-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test_multinode-881244_multinode-881244-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244:/home/docker/cp-test.txt multinode-881244-m03:/home/docker/cp-test_multinode-881244_multinode-881244-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test_multinode-881244_multinode-881244-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp testdata/cp-test.txt multinode-881244-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile547904591/001/cp-test_multinode-881244-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m02:/home/docker/cp-test.txt multinode-881244:/home/docker/cp-test_multinode-881244-m02_multinode-881244.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test_multinode-881244-m02_multinode-881244.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m02:/home/docker/cp-test.txt multinode-881244-m03:/home/docker/cp-test_multinode-881244-m02_multinode-881244-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test_multinode-881244-m02_multinode-881244-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp testdata/cp-test.txt multinode-881244-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile547904591/001/cp-test_multinode-881244-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m03:/home/docker/cp-test.txt multinode-881244:/home/docker/cp-test_multinode-881244-m03_multinode-881244.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244 "sudo cat /home/docker/cp-test_multinode-881244-m03_multinode-881244.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 cp multinode-881244-m03:/home/docker/cp-test.txt multinode-881244-m02:/home/docker/cp-test_multinode-881244-m03_multinode-881244-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 ssh -n multinode-881244-m02 "sudo cat /home/docker/cp-test_multinode-881244-m03_multinode-881244-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.58s)
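Note: every cp above is verified by reading the file back over ssh. A minimal round-trip sketch for one node, using the exact commands from the log:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	// Copy the fixture onto the node, as the test does for every node pair.
	if err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-881244",
		"cp", "testdata/cp-test.txt", "multinode-881244:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare against the local fixture.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-881244",
		"ssh", "-n", "multinode-881244", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(got, want) {
		panic("copied file does not match the fixture")
	}
}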

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-881244 node stop m03: (1.227329341s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881244 status: exit status 7 (534.574713ms)

                                                
                                                
-- stdout --
	multinode-881244
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881244-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881244-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr: exit status 7 (559.957704ms)

                                                
                                                
-- stdout --
	multinode-881244
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881244-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881244-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 17:10:39.834212  112709 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:10:39.834381  112709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:10:39.834394  112709 out.go:304] Setting ErrFile to fd 2...
	I0216 17:10:39.834400  112709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:10:39.834697  112709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:10:39.834941  112709 out.go:298] Setting JSON to false
	I0216 17:10:39.835028  112709 mustload.go:65] Loading cluster: multinode-881244
	I0216 17:10:39.835081  112709 notify.go:220] Checking for updates...
	I0216 17:10:39.836430  112709 config.go:182] Loaded profile config "multinode-881244": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:10:39.836456  112709 status.go:255] checking status of multinode-881244 ...
	I0216 17:10:39.837094  112709 cli_runner.go:164] Run: docker container inspect multinode-881244 --format={{.State.Status}}
	I0216 17:10:39.853282  112709 status.go:330] multinode-881244 host status = "Running" (err=<nil>)
	I0216 17:10:39.853310  112709 host.go:66] Checking if "multinode-881244" exists ...
	I0216 17:10:39.853612  112709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881244
	I0216 17:10:39.870800  112709 host.go:66] Checking if "multinode-881244" exists ...
	I0216 17:10:39.871217  112709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:10:39.871268  112709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881244
	I0216 17:10:39.896181  112709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/multinode-881244/id_rsa Username:docker}
	I0216 17:10:39.993672  112709 ssh_runner.go:195] Run: systemctl --version
	I0216 17:10:39.997751  112709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:10:40.011706  112709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0216 17:10:40.078786  112709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-16 17:10:40.067559071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0216 17:10:40.079364  112709 kubeconfig.go:92] found "multinode-881244" server: "https://192.168.58.2:8443"
	I0216 17:10:40.079392  112709 api_server.go:166] Checking apiserver status ...
	I0216 17:10:40.079443  112709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0216 17:10:40.094292  112709 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2098/cgroup
	I0216 17:10:40.108523  112709 api_server.go:182] apiserver freezer: "5:freezer:/docker/7e864d93db933d3809047bd3269641096f222e0e0d374debcc18395fef275414/kubepods/burstable/podf039cd085f34afe99eca6afe3cd1003a/0b81f73ee636565a39653e1e916a3e4718c54d8990646744c411f0ae6e23c53a"
	I0216 17:10:40.108605  112709 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e864d93db933d3809047bd3269641096f222e0e0d374debcc18395fef275414/kubepods/burstable/podf039cd085f34afe99eca6afe3cd1003a/0b81f73ee636565a39653e1e916a3e4718c54d8990646744c411f0ae6e23c53a/freezer.state
	I0216 17:10:40.129516  112709 api_server.go:204] freezer state: "THAWED"
	I0216 17:10:40.129546  112709 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0216 17:10:40.138617  112709 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0216 17:10:40.138644  112709 status.go:421] multinode-881244 apiserver status = Running (err=<nil>)
	I0216 17:10:40.138654  112709 status.go:257] multinode-881244 status: &{Name:multinode-881244 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:10:40.138671  112709 status.go:255] checking status of multinode-881244-m02 ...
	I0216 17:10:40.138981  112709 cli_runner.go:164] Run: docker container inspect multinode-881244-m02 --format={{.State.Status}}
	I0216 17:10:40.158143  112709 status.go:330] multinode-881244-m02 host status = "Running" (err=<nil>)
	I0216 17:10:40.158171  112709 host.go:66] Checking if "multinode-881244-m02" exists ...
	I0216 17:10:40.158469  112709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881244-m02
	I0216 17:10:40.179025  112709 host.go:66] Checking if "multinode-881244-m02" exists ...
	I0216 17:10:40.179319  112709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0216 17:10:40.179361  112709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881244-m02
	I0216 17:10:40.199711  112709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17936-2208/.minikube/machines/multinode-881244-m02/id_rsa Username:docker}
	I0216 17:10:40.297691  112709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0216 17:10:40.309534  112709 status.go:257] multinode-881244-m02 status: &{Name:multinode-881244-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:10:40.309569  112709 status.go:255] checking status of multinode-881244-m03 ...
	I0216 17:10:40.309865  112709 cli_runner.go:164] Run: docker container inspect multinode-881244-m03 --format={{.State.Status}}
	I0216 17:10:40.325709  112709 status.go:330] multinode-881244-m03 host status = "Stopped" (err=<nil>)
	I0216 17:10:40.325732  112709 status.go:343] host is not running, skipping remaining checks
	I0216 17:10:40.325740  112709 status.go:257] multinode-881244-m03 status: &{Name:multinode-881244-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
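Note: the stderr above shows how the apiserver status is derived: pgrep finds the kube-apiserver PID, its freezer cgroup is read from /proc/<pid>/cgroup, freezer.state must be THAWED, and only then is /healthz probed. A minimal sketch of the freezer-state step; the cgroup path is elided here, the real one appears in the log above:

package main

import (
	"fmt"
	"os"
	"strings"
)

// thawed reports whether the cgroup v1 freezer at dir is running (not frozen);
// the log above shows "THAWED" for a healthy apiserver.
func thawed(dir string) (bool, error) {
	b, err := os.ReadFile(dir + "/freezer.state")
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(b)) == "THAWED", nil
}

func main() {
	ok, err := thawed("/sys/fs/cgroup/freezer/docker/<container-id>/kubepods/...")
	fmt.Println(ok, err)
}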

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-881244 node start m03 --alsologtostderr: (13.061444989s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.85s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881244
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-881244
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-881244: (22.538427544s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881244 --wait=true -v=8 --alsologtostderr
E0216 17:11:46.380771    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881244 --wait=true -v=8 --alsologtostderr: (1m39.930441282s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881244
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.61s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-881244 node delete m03: (4.337714814s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-881244 stop: (21.389834588s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881244 status: exit status 7 (98.414596ms)

                                                
                                                
-- stdout --
	multinode-881244
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881244-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr: exit status 7 (99.626995ms)

                                                
                                                
-- stdout --
	multinode-881244
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881244-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0216 17:13:23.390454  127566 out.go:291] Setting OutFile to fd 1 ...
	I0216 17:13:23.390762  127566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:13:23.390773  127566 out.go:304] Setting ErrFile to fd 2...
	I0216 17:13:23.390779  127566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0216 17:13:23.391047  127566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-2208/.minikube/bin
	I0216 17:13:23.391238  127566 out.go:298] Setting JSON to false
	I0216 17:13:23.391284  127566 mustload.go:65] Loading cluster: multinode-881244
	I0216 17:13:23.391722  127566 config.go:182] Loaded profile config "multinode-881244": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0216 17:13:23.391746  127566 status.go:255] checking status of multinode-881244 ...
	I0216 17:13:23.392251  127566 cli_runner.go:164] Run: docker container inspect multinode-881244 --format={{.State.Status}}
	I0216 17:13:23.392566  127566 notify.go:220] Checking for updates...
	I0216 17:13:23.409513  127566 status.go:330] multinode-881244 host status = "Stopped" (err=<nil>)
	I0216 17:13:23.409535  127566 status.go:343] host is not running, skipping remaining checks
	I0216 17:13:23.409543  127566 status.go:257] multinode-881244 status: &{Name:multinode-881244 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0216 17:13:23.409567  127566 status.go:255] checking status of multinode-881244-m02 ...
	I0216 17:13:23.409864  127566 cli_runner.go:164] Run: docker container inspect multinode-881244-m02 --format={{.State.Status}}
	I0216 17:13:23.429812  127566 status.go:330] multinode-881244-m02 host status = "Stopped" (err=<nil>)
	I0216 17:13:23.429835  127566 status.go:343] host is not running, skipping remaining checks
	I0216 17:13:23.429842  127566 status.go:257] multinode-881244-m02 status: &{Name:multinode-881244-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.59s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (88.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881244 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0216 17:14:50.355694    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881244 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m27.675150316s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881244 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.47s)
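Note: the go-template above emits one Ready-condition status per node, so a restart check only has to confirm every emitted token is True. A minimal sketch of consuming that output; the template string is copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		panic(err)
	}
	for _, s := range strings.Fields(string(out)) {
		if strings.Trim(s, "'") != "True" {
			fmt.Println("node not Ready:", s)
		}
	}
}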

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881244
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881244-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-881244-m02 --driver=docker  --container-runtime=docker: exit status 14 (105.538112ms)

                                                
                                                
-- stdout --
	* [multinode-881244-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-881244-m02' is duplicated with machine name 'multinode-881244-m02' in profile 'multinode-881244'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881244-m03 --driver=docker  --container-runtime=docker
E0216 17:15:23.336178    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881244-m03 --driver=docker  --container-runtime=docker: (36.063838126s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881244
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-881244: exit status 80 (375.035199ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-881244
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-881244-m03 already exists in multinode-881244-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-881244-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-881244-m03: (2.286870443s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.90s)

                                                
                                    
TestPreload (176.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-944703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-944703 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m43.370205769s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-944703 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-944703 image pull gcr.io/k8s-minikube/busybox: (1.462167931s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-944703
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-944703: (10.84328259s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-944703 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-944703 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (58.116466648s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-944703 image list
helpers_test.go:175: Cleaning up "test-preload-944703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-944703
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-944703: (2.174914921s)
--- PASS: TestPreload (176.20s)

                                                
                                    
TestScheduledStopUnix (106.22s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-916933 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-916933 --memory=2048 --driver=docker  --container-runtime=docker: (32.989969463s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916933 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-916933 -n scheduled-stop-916933
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916933 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916933 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916933 -n scheduled-stop-916933
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-916933
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-916933 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0216 17:19:50.355270    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-916933
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-916933: exit status 7 (73.782127ms)

                                                
                                                
-- stdout --
	scheduled-stop-916933
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916933 -n scheduled-stop-916933
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-916933 -n scheduled-stop-916933: exit status 7 (76.844993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-916933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-916933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-916933: (1.630553305s)
--- PASS: TestScheduledStopUnix (106.22s)
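Note: --schedule arms a deferred stop and --cancel-scheduled disarms it before it fires, which is why the status checks above still see the host Running until the final 15s schedule elapses. A minimal in-process sketch of that arm/cancel pattern; minikube's real implementation daemonizes the stop rather than holding a timer in the calling process:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Arm a stop 15 seconds out, conceptually like "stop --schedule 15s".
	stop := time.AfterFunc(15*time.Second, func() { fmt.Println("stopping cluster") })
	// "--cancel-scheduled": disarm the pending stop before it fires.
	if stop.Stop() {
		fmt.Println("scheduled stop cancelled")
	}
}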

                                                
                                    
TestSkaffold (119.58s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1960101853 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-756325 --memory=2600 --driver=docker  --container-runtime=docker
E0216 17:20:23.336152    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-756325 --memory=2600 --driver=docker  --container-runtime=docker: (30.369828901s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1960101853 run --minikube-profile skaffold-756325 --kube-context skaffold-756325 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1960101853 run --minikube-profile skaffold-756325 --kube-context skaffold-756325 --status-check=true --port-forward=false --interactive=false: (1m12.609184719s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-67498cf465-z6dbm" [6f4220db-51c8-4628-b86c-ad1672a8062c] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003403167s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-754545877d-6gjjj" [20988054-ad9c-4caf-a837-3504ca33a5b0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00370233s
helpers_test.go:175: Cleaning up "skaffold-756325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-756325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-756325: (3.055211736s)
--- PASS: TestSkaffold (119.58s)

                                                
                                    
TestInsufficientStorage (10.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-055975 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-055975 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.502655004s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9581304b-47a8-49f6-82c4-e8f9f58ac5f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-055975] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"40a0cd4e-8793-43d8-968b-81092ed7d4b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17936"}}
	{"specversion":"1.0","id":"d6caf361-38f7-4170-8e2a-57941a7e054f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8c846302-306b-4115-93da-561124183e85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig"}}
	{"specversion":"1.0","id":"891e1117-7a8a-4e00-a325-01bb2de15aee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube"}}
	{"specversion":"1.0","id":"62fe4d66-8fad-4d73-9ebb-3f3f4f5ee9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2b8c8677-ce2f-4ef1-9e29-b96f48d974cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c214a7a8-923b-4956-94d0-0f2a4a006b91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bd18d960-e348-4aec-a0e6-0b0cdd699ea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0c65d6c5-f78f-41b1-8e17-3179d829107b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b719a52b-d292-41d7-9e86-3e27faf8bed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"756c684e-6064-4eb1-a6d9-f36920d2b51a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-055975 in cluster insufficient-storage-055975","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d95bcd51-e683-4876-8ffd-4c7e14834502","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708008208-17936 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ab045c9-74df-4f0f-974b-b9bc940a3d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aa22622-464f-44b6-bb17-559410d112b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-055975 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-055975 --output=json --layout=cluster: exit status 7 (302.067832ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-055975","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-055975","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:22:25.946378  162123 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-055975" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-055975 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-055975 --output=json --layout=cluster: exit status 7 (311.612539ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-055975","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-055975","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0216 17:22:26.258678  162178 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-055975" does not appear in /home/jenkins/minikube-integration/17936-2208/kubeconfig
	E0216 17:22:26.268814  162178 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/insufficient-storage-055975/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-055975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-055975
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-055975: (1.686286359s)
--- PASS: TestInsufficientStorage (10.80s)
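
The start events above are CloudEvents-style JSON records, so both the simulated shortage and the resulting state can be checked mechanically. A hedged sketch, assuming (as the MINIKUBE_TEST_* messages in the log suggest) that these test-only variables induce the storage check, and with jq assumed available:

    # Expect exit code 26 (RSRC_DOCKER_STORAGE) from start, then a 507
    # InsufficientStorage status in the cluster layout.
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage --output=json --wait=true --driver=docker
    echo "start exit code: $?"   # 26 expected
    minikube status -p insufficient-storage --output=json --layout=cluster \
      | jq '{code: .StatusCode, name: .StatusName}'   # 507 / "InsufficientStorage"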

                                                
                                    
TestRunningBinaryUpgrade (71.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2973135943 start -p running-upgrade-750981 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2973135943 start -p running-upgrade-750981 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.736246147s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-750981 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-750981 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.807663647s)
helpers_test.go:175: Cleaning up "running-upgrade-750981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-750981
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-750981: (2.249617062s)
--- PASS: TestRunningBinaryUpgrade (71.84s)
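
This upgrade path is just two start calls on the same profile: the old release binary creates the cluster, and the binary under test re-runs start against it while it is still running. A sketch with the paths from the log (the /tmp name carries a random suffix; note the legacy --vm-driver spelling on the old binary):

    /tmp/minikube-v1.26.0.2973135943 start -p running-upgrade-750981 --memory=2200 --vm-driver=docker --container-runtime=docker
    out/minikube-linux-arm64 start -p running-upgrade-750981 --memory=2200 --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 delete -p running-upgrade-750981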

                                                
                                    
TestMissingContainerUpgrade (142.09s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.308304024 start -p missing-upgrade-381686 --memory=2200 --driver=docker  --container-runtime=docker
E0216 17:30:23.334827    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.308304024 start -p missing-upgrade-381686 --memory=2200 --driver=docker  --container-runtime=docker: (1m12.685870515s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-381686
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-381686: (10.350813018s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-381686
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-381686 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0216 17:32:03.078092    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:32:30.762010    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-381686 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.826532469s)
helpers_test.go:175: Cleaning up "missing-upgrade-381686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-381686
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-381686: (2.123150848s)
--- PASS: TestMissingContainerUpgrade (142.09s)
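
The variation here is that the node container is stopped and removed out from under the old cluster before the new binary runs, so the upgrade must recreate the container rather than adopt it. In outline:

    /tmp/minikube-v1.26.0.308304024 start -p missing-upgrade-381686 --memory=2200 --driver=docker --container-runtime=docker
    docker stop missing-upgrade-381686 && docker rm missing-upgrade-381686
    out/minikube-linux-arm64 start -p missing-upgrade-381686 --memory=2200 --driver=docker --container-runtime=docker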

                                                
                                    
TestPause/serial/Start (95.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714133 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0216 17:22:53.403313    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-714133 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m35.46394235s)
--- PASS: TestPause/serial/Start (95.46s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.63s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714133 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-714133 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.604083268s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (108.932412ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-585678] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17936
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17936-2208/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-2208/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
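
Exit code 14 (MK_USAGE) is the expected result here: --no-kubernetes and --kubernetes-version are mutually exclusive by design. If the version is pinned in global config rather than on the command line, the remedy printed in the stderr above applies:

    minikube start -p NoKubernetes-585678 --no-kubernetes --kubernetes-version=1.20; echo $?   # 14
    minikube config unset kubernetes-version   # clear a globally pinned version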

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585678 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585678 --driver=docker  --container-runtime=docker: (38.941145197s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-585678 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --driver=docker  --container-runtime=docker: (14.705664114s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-585678 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-585678 status -o json: exit status 2 (287.951285ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-585678","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-585678
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-585678: (1.727844184s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.72s)
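
Without --layout=cluster, status -o json emits the flat form shown above, and the exit status 2 reflects the stopped Kubernetes components, which is exactly the state --no-kubernetes should leave behind. A sketch of the check, with jq assumed available:

    # Host keeps running while kubelet/apiserver are stopped.
    minikube -p NoKubernetes-585678 status -o json | jq -r '.Host, .Kubelet'   # Running / Stopped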

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714133 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-714133 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-714133 --output=json --layout=cluster: exit status 2 (318.623615ms)

                                                
                                                
-- stdout --
	{"Name":"pause-714133","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-714133","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
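
The cluster layout reuses HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and (as in TestInsufficientStorage above) 507 InsufficientStorage. Note that status itself exits 2 for a paused cluster, so capture the JSON before inspecting $?. A sketch, with jq assumed available:

    out=$(minikube status -p pause-714133 --output=json --layout=cluster)
    echo "status exit code: $?"   # 2 for a paused cluster, as in the log
    echo "$out" | jq -r '.Nodes[0].Components.apiserver.StatusName'   # Paused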

                                                
                                    
TestPause/serial/Unpause (0.59s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-714133 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714133 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (2.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-714133 --alsologtostderr -v=5
E0216 17:24:50.354855    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-714133 --alsologtostderr -v=5: (2.052390981s)
--- PASS: TestPause/serial/DeletePaused (2.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.333936458s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-714133
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-714133: exit status 1 (21.085695ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-714133: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.40s)
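
The deletion check leans on Docker itself: once delete -p has run, the profile's named volume must be gone, so an inspect that fails with "no such volume" is the passing outcome:

    docker volume inspect pause-714133 >/dev/null 2>&1 || echo "volume removed"
    docker ps -a --filter name=pause-714133   # should list no containers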

                                                
                                    
TestNoKubernetes/serial/Start (7.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585678 --no-kubernetes --driver=docker  --container-runtime=docker: (7.589250237s)
--- PASS: TestNoKubernetes/serial/Start (7.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-585678 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-585678 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.65784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
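
systemctl is-active exits 0 only for an active unit; status 3 means inactive, and that non-zero status propagates through minikube ssh (seen above as "Process exited with status 3"), which is precisely what the test asserts:

    minikube ssh -p NoKubernetes-585678 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet inactive, as expected"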

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-585678
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-585678: (1.264364751s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-585678 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-585678 --driver=docker  --container-runtime=docker: (8.72498062s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-585678 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-585678 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.466537ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (73.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3772262592 start -p stopped-upgrade-929338 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3772262592 start -p stopped-upgrade-929338 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.858660846s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3772262592 -p stopped-upgrade-929338 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3772262592 -p stopped-upgrade-929338 stop: (10.731178116s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-929338 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0216 17:34:50.355642    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-929338 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.17505753s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.77s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-929338
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-929338: (1.294675657s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0216 17:35:23.334630    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m25.194988957s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.20s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ddbsh" [481a0835-d15c-4bb9-931c-d22ae14734d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ddbsh" [481a0835-d15c-4bb9-931c-d22ae14734d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004785905s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
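
Each CNI group in this report repeats the same three probes from inside the netcat deployment: cluster DNS resolution, a loopback connection, and a hairpin connection (the pod reaching itself back through its own netcat Service). Spelled out against the auto context:

    kubectl --context auto-850655 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin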

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m12.782906465s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.78s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m20.669767974s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.67s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qts9t" [3465fe9d-95bc-47b1-aeab-c0f0459a2540] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004774221s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvnpm" [1e8a1d21-c931-43ee-a83c-4fe795030bb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cvnpm" [1e8a1d21-c931-43ee-a83c-4fe795030bb4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004195802s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-78fcn" [69c93850-308a-4517-a724-ae0c6c66957d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005749516s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zxkjg" [71c79af3-7bdb-46f9-bc92-c97016f71316] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zxkjg" [71c79af3-7bdb-46f9-bc92-c97016f71316] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004243882s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m8.098238807s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.10s)
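
Unlike the named plugins exercised elsewhere in this group (kindnet, calico, flannel, bridge, false), this variant passes a manifest path to --cni, so any CNI that ships as a Kubernetes YAML can be exercised the same way:

    minikube start -p custom-flannel-850655 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker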

                                                
                                    
TestNetworkPlugins/group/false/Start (93.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0216 17:39:33.403570    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:39:50.355479    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m33.166575641s)
--- PASS: TestNetworkPlugins/group/false/Start (93.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4h5rw" [2cbda2ac-aa6a-4165-8d8d-3261777d1510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4h5rw" [2cbda2ac-aa6a-4165-8d8d-3261777d1510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004363492s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (56.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (56.638478787s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.64s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-555kz" [9def142b-477b-4c97-a904-35b10ae67176] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-555kz" [9def142b-477b-4c97-a904-35b10ae67176] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004059128s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0216 17:41:32.594661    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:41:33.874900    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:41:36.435854    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:41:41.556591    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m8.122122917s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dn7wx" [a717dc75-eacd-439f-8859-4500cfb1da74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dn7wx" [a717dc75-eacd-439f-8859-4500cfb1da74] Running
E0216 17:41:51.797020    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004190923s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (51.307214438s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6wk4m" [97bc54f7-362c-4baf-a89d-d793d0648daf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004804818s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f9x7x" [50140cfa-8c7d-4f0a-aa54-998da2827214] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f9x7x" [50140cfa-8c7d-4f0a-aa54-998da2827214] Running
E0216 17:42:53.237621    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008406925s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)
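
Note: Localhost and HairPin differ only in the target. Localhost dials 127.0.0.1 from inside the netcat pod, while HairPin dials the pod's own Service name, so the connection leaves the pod, hits the Service VIP, and must be NATed back to the very pod that originated it; that loop only succeeds when hairpin mode works on the node. The two probes side by side (commands copied from the log above):

    kubectl --context flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # self via loopback
    kubectl --context flannel-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # self via its own Service (hairpin NAT)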

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tcl6n" [7aa381da-a1b5-4265-9fff-b02dec109357] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tcl6n" [7aa381da-a1b5-4265-9fff-b02dec109357] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005496831s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (92.59s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0216 17:43:22.464557    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:43:23.745439    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:43:26.122448    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:43:26.305905    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:43:31.426920    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:43:37.211482    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.222634    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.233608    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.254627    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.295120    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.377558    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.539025    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:37.859340    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:38.499771    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:39.780503    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:43:41.668002    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:43:42.341496    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-850655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m32.588058115s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (92.59s)
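
Note: this run selects the legacy kubelet network plugin via --network-plugin=kubenet instead of a --cni=... flag, which is why the invocation differs from the bridge and flannel starts above. The interleaved cert_rotation.go:168 errors appear to be noise from the shared test process (pid 7513): its client-cert reload watcher still references kubeconfig entries for profiles that earlier tests already tore down (kindnet-850655, calico-850655, ...), so the client.crt files are gone by the time it re-reads them. A quick way to see this (a sketch, not part of the test):

    # only live profiles remain here at this point in the run; the paths in the E-lines are gone
    ls /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/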

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-850655 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-850655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p75ln" [df38d59c-b6ef-4914-abf1-5bc5afcb49d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0216 17:44:59.143106    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-p75ln" [df38d59c-b6ef-4914-abf1-5bc5afcb49d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00364942s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-850655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-850655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (54.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-323647 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 17:45:32.053039    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:45:52.533851    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:45:56.303536    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.308791    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.319015    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.339278    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.379539    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.459795    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.620144    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:56.940643    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:57.581799    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:45:58.862413    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:46:01.423329    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:46:05.029604    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:46:06.544338    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:46:16.784589    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:46:21.063338    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-323647 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (54.991661109s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.99s)
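
Note: --preload=false tells minikube to skip the preloaded image tarball, so this FirstStart pulls every image for the requested v1.29.0-rc.2 from scratch. To see what ended up in the node's image store afterwards (a sketch, not part of the test):

    out/minikube-linux-arm64 -p no-preload-323647 image list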

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-323647 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c8d48fa-5a29-4df4-bf44-9721ca14decd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c8d48fa-5a29-4df4-bf44-9721ca14decd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004053812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-323647 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.34s)
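
Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, then runs ulimit -n inside it to confirm the container gets a usable open-file limit. Reproduced with plain kubectl it is roughly (a sketch; kubectl wait stands in for the harness poller):

    kubectl --context no-preload-323647 create -f testdata/busybox.yaml
    kubectl --context no-preload-323647 wait pod busybox --for=condition=Ready --timeout=480s
    kubectl --context no-preload-323647 exec busybox -- /bin/sh -c "ulimit -n"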

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-323647 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-323647 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.78s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-323647 --alsologtostderr -v=3
E0216 17:46:31.316490    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:46:33.494953    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:46:37.265152    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-323647 --alsologtostderr -v=3: (10.784738214s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-323647 -n no-preload-323647
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-323647 -n no-preload-323647: exit status 7 (89.948931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-323647 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
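
Note: on a stopped profile, minikube status exits nonzero by design; here exit status 7 accompanies "Stopped" on stdout, which the harness explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped cluster. The same probe in isolation (a sketch):

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-323647 -n no-preload-323647 || echo "status exited $? (stopped profile, may be ok)"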

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (314.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-323647 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 17:46:42.639362    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.644661    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.655018    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.675267    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.715512    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.795758    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:42.956103    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:43.276910    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:43.917984    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:45.198267    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:47.758444    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:52.878599    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:46:58.999432    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:47:03.077683    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 17:47:03.118979    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:47:18.225388    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:47:23.599447    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:47:40.171628    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.177051    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.187288    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.207587    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.247843    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.328188    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.488571    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:40.808966    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:41.449126    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:42.729837    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:45.290480    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:50.411168    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:47:55.415965    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:48:00.651692    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:48:04.559637    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:48:09.423930    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.429314    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.439487    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.459721    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.500025    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.580308    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:09.740670    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:10.061273    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:10.702440    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:11.982608    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:14.543693    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:19.663912    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:21.132585    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:48:21.177740    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:48:29.904503    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:48:37.211043    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:48:40.145840    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:48:48.870380    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:48:50.385388    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:49:02.093088    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:49:04.903544    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 17:49:26.480239    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 17:49:31.346346    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:49:50.355675    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:49:55.529703    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.535013    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.545269    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.565646    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.605944    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.686267    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:55.846604    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:56.167138    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:56.808360    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:49:58.088838    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:50:00.649270    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:50:05.770203    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:50:11.567645    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:50:16.010418    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:50:23.334364    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 17:50:24.013632    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:50:36.490680    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:50:39.257000    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 17:50:53.266884    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:50:56.303487    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:51:17.450860    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 17:51:23.986776    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 17:51:31.317190    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 17:51:42.639116    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-323647 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m14.469602023s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-323647 -n no-preload-323647
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2dw67" [639e8e79-c207-4833-b97f-315cc7ab2352] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2dw67" [639e8e79-c207-4833-b97f-315cc7ab2352] Running
E0216 17:52:03.078017    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004339908s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2dw67" [639e8e79-c207-4833-b97f-315cc7ab2352] Running
E0216 17:52:10.321333    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003688289s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-323647 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-323647 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
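
Note: VerifyKubernetesImages lists the node's images as JSON and flags anything outside the expected Kubernetes set; the gcr.io/k8s-minikube/busybox image reported above is the test workload from DeployApp, so finding it is expected. To eyeball the list yourself (a sketch; piping through jq and the .repoTags field name are assumptions about the JSON shape):

    out/minikube-linux-arm64 -p no-preload-323647 image list --format=json | jq -r '.[].repoTags[]?'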

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.86s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-323647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-323647 -n no-preload-323647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-323647 -n no-preload-323647: exit status 2 (337.756882ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-323647 -n no-preload-323647
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-323647 -n no-preload-323647: exit status 2 (315.831006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-323647 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-323647 -n no-preload-323647
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-323647 -n no-preload-323647
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.86s)
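
Note: Pause freezes the control plane and kubelet, so status deliberately reports Paused/Stopped with exit code 2 (again tolerated as "may be ok"), and unpause is expected to bring both back. The round trip exercised above, in isolation (a sketch):

    out/minikube-linux-arm64 pause -p no-preload-323647 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-323647   # "Paused", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-323647 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-323647   # apiserver running again if unpause worked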

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.62s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-198397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-198397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (45.621970538s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.62s)
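
Note: --embed-certs inlines the client certificates into kubeconfig as base64 data instead of referencing files under .minikube/profiles/<name>/, which also sidesteps the dangling client.crt paths behind the cert_rotation noise elsewhere in this log. One way to confirm the embedding (a sketch; the jsonpath filter assumes the kubeconfig user is named after the profile):

    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-198397")].user.client-certificate-data}' | head -c 40; echo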

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-198397 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b72cbd6e-4260-40e6-b08d-5af59a39d46a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b72cbd6e-4260-40e6-b08d-5af59a39d46a] Running
E0216 17:53:07.853938    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 17:53:09.424163    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003630802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-198397 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-198397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-198397 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.90s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-198397 --alsologtostderr -v=3
E0216 17:53:21.177866    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-198397 --alsologtostderr -v=3: (10.901387463s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-198397 -n embed-certs-198397
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-198397 -n embed-certs-198397: exit status 7 (79.8006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-198397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (316.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-198397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:53:37.107693    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 17:53:37.211072    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-198397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m16.028703405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-198397 -n embed-certs-198397
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (316.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-488384 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-488384 --alsologtostderr -v=3: (1.235451983s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-488384 -n old-k8s-version-488384: exit status 7 (96.615128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-488384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sw8kk" [afb1ee14-140d-420b-9678-0b04f14d6814] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sw8kk" [afb1ee14-140d-420b-9678-0b04f14d6814] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.00411944s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sw8kk" [afb1ee14-140d-420b-9678-0b04f14d6814] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004076519s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-198397 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-198397 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-198397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-198397 -n embed-certs-198397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-198397 -n embed-certs-198397: exit status 2 (343.698995ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-198397 -n embed-certs-198397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-198397 -n embed-certs-198397: exit status 2 (344.421102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-198397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-198397 -n embed-certs-198397
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-198397 -n embed-certs-198397
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-396551 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 17:59:05.385642    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 17:59:44.231225    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 17:59:50.354774    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 17:59:55.530201    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 18:00:00.264599    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 18:00:06.123611    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 18:00:11.567758    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 18:00:23.334751    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-396551 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m24.069563716s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.07s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-396551 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fdacb12f-8c87-4ae1-b959-1a40a72a244e] Pending
helpers_test.go:344: "busybox" [fdacb12f-8c87-4ae1-b959-1a40a72a244e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fdacb12f-8c87-4ae1-b959-1a40a72a244e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004058482s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-396551 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-396551 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-396551 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-396551 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-396551 --alsologtostderr -v=3: (10.901910706s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.90s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551: exit status 7 (96.228879ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-396551 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-396551 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0216 18:00:56.303788    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 18:01:21.543382    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 18:01:31.317119    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
E0216 18:01:34.617131    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 18:01:42.639676    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 18:01:46.382878    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 18:01:49.225870    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 18:02:03.077406    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
E0216 18:02:19.346993    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 18:02:40.171080    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 18:03:05.682229    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
E0216 18:03:09.423695    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 18:03:21.178005    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kindnet-850655/client.crt: no such file or directory
E0216 18:03:37.211004    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/calico-850655/client.crt: no such file or directory
E0216 18:04:03.214149    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/flannel-850655/client.crt: no such file or directory
E0216 18:04:32.468095    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/bridge-850655/client.crt: no such file or directory
E0216 18:04:50.355734    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/addons-105162/client.crt: no such file or directory
E0216 18:04:55.530124    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 18:05:11.567828    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/custom-flannel-850655/client.crt: no such file or directory
E0216 18:05:23.334215    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/functional-918954/client.crt: no such file or directory
E0216 18:05:56.303567    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/false-850655/client.crt: no such file or directory
E0216 18:06:18.572459    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/kubenet-850655/client.crt: no such file or directory
E0216 18:06:21.543069    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/no-preload-323647/client.crt: no such file or directory
E0216 18:06:31.316573    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/auto-850655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-396551 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m43.662449224s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zf8dt" [126ef3b7-090c-4653-9977-26ce7b16efc3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0216 18:06:42.639692    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/enable-default-cni-850655/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zf8dt" [126ef3b7-090c-4653-9977-26ce7b16efc3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.003788254s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zf8dt" [126ef3b7-090c-4653-9977-26ce7b16efc3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005506266s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-396551 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-396551 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-396551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551: exit status 2 (330.345042ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551: exit status 2 (341.509499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-396551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-396551 -n default-k8s-diff-port-396551
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

TestStartStop/group/newest-cni/serial/FirstStart (46.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-474812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0216 18:07:03.077739    7513 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-2208/.minikube/profiles/skaffold-756325/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-474812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (46.007124091s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-474812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-474812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.200796284s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (9.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-474812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-474812 --alsologtostderr -v=3: (9.128043031s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474812 -n newest-cni-474812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474812 -n newest-cni-474812: exit status 7 (76.569968ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-474812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (33.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-474812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-474812 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (33.484485222s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474812 -n newest-cni-474812
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-474812 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-474812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474812 -n newest-cni-474812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474812 -n newest-cni-474812: exit status 2 (332.221289ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-474812 -n newest-cni-474812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-474812 -n newest-cni-474812: exit status 2 (316.659371ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-474812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474812 -n newest-cni-474812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-474812 -n newest-cni-474812
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.78s)

Test skip (27/330)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-797413 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-797413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-797413
--- SKIP: TestDownloadOnlyKic (0.54s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-850655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-850655" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-850655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: containerd config dump:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: crio daemon status:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: crio daemon config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: /etc/crio:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

>>> host: crio config:
* Profile "cilium-850655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-850655"

----------------------- debugLogs end: cilium-850655 [took: 5.060089835s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-850655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-850655
--- SKIP: TestNetworkPlugins/group/cilium (5.30s)
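The debugLogs block above is a fixed sequence of host-state probes (daemon configs, systemd units, runtime status) run against the "cilium-850655" profile; because the profile was never created, every probe prints the same "Profile not found" hint. A minimal Go sketch of that kind of probe loop follows. It is illustrative only, not minikube's actual debugLogs code; the probe list and profile name are taken from the log above, and the command arguments are assumptions.

package main

import (
	"fmt"
	"os/exec"
)

// probe pairs the label printed after ">>> host:" with the command
// used to collect that piece of host state.
type probe struct {
	label string
	cmd   []string
}

func main() {
	profile := "cilium-850655" // profile name taken from the log above

	// A few representative probes; the list in the log is much longer.
	probes := []probe{
		{"/etc/docker/daemon.json", []string{"ssh", "-p", profile, "sudo cat /etc/docker/daemon.json"}},
		{"docker system info", []string{"ssh", "-p", profile, "docker system info"}},
		{"containerd daemon status", []string{"ssh", "-p", profile, "sudo systemctl status containerd"}},
	}

	for _, p := range probes {
		fmt.Printf(">>> host: %s:\n", p.label)
		// Every probe shells out through the same CLI, so a missing
		// profile yields the identical "Profile ... not found" hint
		// each time rather than a distinct error per command.
		out, err := exec.Command("minikube", p.cmd...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Printf("(probe failed: %v)\n", err)
		}
	}
}

Because each probe runs independently, the loop keeps going after a failure, which is why the excerpt above shows one identical "Profile not found" block per probe instead of stopping at the first one.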

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-083322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-083322
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
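The skip at start_stop_delete_test.go:103 gates this test on the VirtualBox driver, so under the docker driver used in this run it never starts a cluster and only pays the small cleanup cost. A hedged Go sketch of that kind of driver gate is below; the package name, test body, and the way the driver is detected are assumptions for illustration, not the repo's exact code.

package startstop_test

import (
	"os"
	"testing"
)

// TestDisableDriverMounts mirrors the gate seen in the log: the test
// only makes sense on virtualbox, so any other driver skips early.
func TestDisableDriverMounts(t *testing.T) {
	// Hypothetical driver lookup; the real suite passes the driver
	// through its own test flags rather than this environment variable.
	if driver := os.Getenv("TEST_DRIVER"); driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... assertions about disabled driver mounts would go here ...
}

Skipping before any cluster is started keeps the SKIP entry cheap, which matches the 0.17s spent here purely on deleting the "disable-driver-mounts-083322" profile.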
