Test Report: Docker_Linux_docker_arm64 18647

cbf61390ee716906db88190ad6530e4e486e1432:2024-04-16:34045
Failed tests (2/350)

Order  Failed test                                             Duration (s)
39     TestAddons/parallel/Ingress                             36.84
380    TestStartStop/group/old-k8s-version/serial/SecondStart  371.23
TestAddons/parallel/Ingress (36.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-716538 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-716538 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-716538 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5f6d0e0b-5dd2-4f26-8013-d61f79bdd60f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5f6d0e0b-5dd2-4f26-8013-d61f79bdd60f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00448533s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-716538 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.098176796s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-716538 addons disable ingress --alsologtostderr -v=1: (7.731128431s)
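
The failure above is the ingress-dns check: the nslookup against the cluster node IP (192.168.49.2) timed out after ~15s, i.e. nothing answered DNS on that address. A minimal manual reproduction against a running profile, using the profile name from this run (the -l pod selector below is an assumption about the addon's label; verify it against the ingress-dns manifest):

	out/minikube-linux-arm64 -p addons-716538 ip                                          # node IP; 192.168.49.2 in this run
	kubectl --context addons-716538 -n kube-system get pods -l app=minikube-ingress-dns   # assumed label; the addon pod should be Running
	nslookup hello-john.test 192.168.49.2                                                 # should resolve once the addon's DNS server is serving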
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-716538
helpers_test.go:235: (dbg) docker inspect addons-716538:
-- stdout --
	[
	    {
	        "Id": "fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf",
	        "Created": "2024-04-15T23:38:18.046390973Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-15T23:38:18.398307508Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:05b5b2cbc7157bfe11e03d0beeaf25e36e83e7ad2b499390548ca8693c4ec20b",
	        "ResolvConfPath": "/var/lib/docker/containers/fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf/hosts",
	        "LogPath": "/var/lib/docker/containers/fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf/fba348aa66fc67f0d2415b09541d5d5778dcb3dceb72c6fc1bc8146b7264f2cf-json.log",
	        "Name": "/addons-716538",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-716538:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-716538",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5424c1046bec88135a1a415790c93f090b4c8d1152590dbe1976c3625be9e580-init/diff:/var/lib/docker/overlay2/d2fb7d5dfad483877edf794e760fbf311a1d68be07bb2438f714c78875e64b61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5424c1046bec88135a1a415790c93f090b4c8d1152590dbe1976c3625be9e580/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5424c1046bec88135a1a415790c93f090b4c8d1152590dbe1976c3625be9e580/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5424c1046bec88135a1a415790c93f090b4c8d1152590dbe1976c3625be9e580/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-716538",
	                "Source": "/var/lib/docker/volumes/addons-716538/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-716538",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-716538",
	                "name.minikube.sigs.k8s.io": "addons-716538",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2dad35a0fa6210f61c6bac6f7839685369cad3958a05433a52152397ebdee04a",
	            "SandboxKey": "/var/run/docker/netns/2dad35a0fa62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-716538": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "b80d77fde6b40e385232adf3bb09cb8d6addd6f645a774f5cf1373d587905afc",
	                    "EndpointID": "3731eac35f01c8a0099be273ba61ca7733001cb13010e6b5b659df6d3cff2263",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-716538",
	                        "fba348aa66fc"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
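
For spot checks, the same data can be pulled from docker inspect with a Go-template --format query instead of reading the full dump; for example, the container IP and host port bindings recorded above:

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-716538   # 192.168.49.2
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-716538                            # host port map as JSON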
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-716538 -n addons-716538
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-716538 logs -n 25: (1.156793231s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-716845              | download-only-716845   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | -o=json --download-only              | download-only-788547   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-788547              |                        |         |                |                     |                     |
	|         | --force --alsologtostderr            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2    |                        |         |                |                     |                     |
	|         | --container-runtime=docker           |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=docker           |                        |         |                |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-788547              | download-only-788547   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-781968              | download-only-781968   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-716845              | download-only-716845   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-788547              | download-only-788547   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | --download-only -p                   | download-docker-774235 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | download-docker-774235               |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=docker           |                        |         |                |                     |                     |
	| delete  | -p download-docker-774235            | download-docker-774235 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | --download-only -p                   | binary-mirror-763588   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | binary-mirror-763588                 |                        |         |                |                     |                     |
	|         | --alsologtostderr                    |                        |         |                |                     |                     |
	|         | --binary-mirror                      |                        |         |                |                     |                     |
	|         | http://127.0.0.1:35985               |                        |         |                |                     |                     |
	|         | --driver=docker                      |                        |         |                |                     |                     |
	|         | --container-runtime=docker           |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-763588              | binary-mirror-763588   | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| addons  | enable dashboard -p                  | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | addons-716538                        |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                 | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | addons-716538                        |                        |         |                |                     |                     |
	| start   | -p addons-716538 --wait=true         | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:40 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |                |                     |                     |
	|         | --addons=registry                    |                        |         |                |                     |                     |
	|         | --addons=metrics-server              |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |                |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |                |                     |                     |
	|         |  --container-runtime=docker          |                        |         |                |                     |                     |
	|         | --addons=ingress                     |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |                |                     |                     |
	| ip      | addons-716538 ip                     | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:40 UTC | 15 Apr 24 23:40 UTC |
	| addons  | addons-716538 addons disable         | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:40 UTC | 15 Apr 24 23:40 UTC |
	|         | registry --alsologtostderr           |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-716538 addons                 | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:40 UTC | 15 Apr 24 23:40 UTC |
	|         | disable metrics-server               |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:40 UTC | 15 Apr 24 23:40 UTC |
	|         | addons-716538                        |                        |         |                |                     |                     |
	| ssh     | addons-716538 ssh curl -s            | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |                |                     |                     |
	|         | nginx.example.com'                   |                        |         |                |                     |                     |
	| ip      | addons-716538 ip                     | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	| addons  | addons-716538 addons                 | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | disable csi-hostpath-driver          |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | addons-716538 addons                 | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | disable volumesnapshots              |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |                |                     |                     |
	| addons  | addons-716538 addons disable         | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |                |                     |                     |
	|         | -v=1                                 |                        |         |                |                     |                     |
	| addons  | addons-716538 addons disable         | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-716538          | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:41 UTC | 15 Apr 24 23:41 UTC |
	|         | -p addons-716538                     |                        |         |                |                     |                     |
	|---------|--------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:37:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:37:53.788966    8402 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:37:53.789127    8402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:53.789138    8402 out.go:304] Setting ErrFile to fd 2...
	I0415 23:37:53.789143    8402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:53.789398    8402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:37:53.789841    8402 out.go:298] Setting JSON to false
	I0415 23:37:53.790559    8402 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1209,"bootTime":1713223065,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:37:53.790627    8402 start.go:139] virtualization:  
	I0415 23:37:53.800062    8402 out.go:177] * [addons-716538] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0415 23:37:53.807591    8402 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:37:53.812652    8402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:37:53.807718    8402 notify.go:220] Checking for updates...
	I0415 23:37:53.824274    8402 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:37:53.831150    8402 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:37:53.835682    8402 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0415 23:37:53.842988    8402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:37:53.847776    8402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:37:53.866581    8402 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:37:53.866713    8402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:53.935162    8402 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:53.925933926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:53.935297    8402 docker.go:295] overlay module found
	I0415 23:37:53.945159    8402 out.go:177] * Using the docker driver based on user configuration
	I0415 23:37:53.956964    8402 start.go:297] selected driver: docker
	I0415 23:37:53.956984    8402 start.go:901] validating driver "docker" against <nil>
	I0415 23:37:53.956998    8402 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:37:53.957612    8402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:54.034130    8402 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:54.024292114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:54.034306    8402 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:37:54.034547    8402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:37:54.050629    8402 out.go:177] * Using Docker driver with root privileges
	I0415 23:37:54.062108    8402 cni.go:84] Creating CNI manager for ""
	I0415 23:37:54.062156    8402 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 23:37:54.062166    8402 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:37:54.062261    8402 start.go:340] cluster config:
	{Name:addons-716538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-716538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:37:54.070922    8402 out.go:177] * Starting "addons-716538" primary control-plane node in "addons-716538" cluster
	I0415 23:37:54.083376    8402 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 23:37:54.090850    8402 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0415 23:37:54.102087    8402 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 23:37:54.102119    8402 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 23:37:54.102162    8402 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 23:37:54.102172    8402 cache.go:56] Caching tarball of preloaded images
	I0415 23:37:54.102275    8402 preload.go:173] Found /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0415 23:37:54.102285    8402 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0415 23:37:54.102625    8402 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/config.json ...
	I0415 23:37:54.102655    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/config.json: {Name:mkf3f2e4b15aaf7499039d515a93c251a944cb71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:37:54.117018    8402 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 23:37:54.117152    8402 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 23:37:54.117176    8402 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory, skipping pull
	I0415 23:37:54.117183    8402 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in cache, skipping pull
	I0415 23:37:54.117192    8402 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af as a tarball
	I0415 23:37:54.117204    8402 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af from local cache
	I0415 23:38:11.047399    8402 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af from cached tarball
	I0415 23:38:11.047439    8402 cache.go:194] Successfully downloaded all kic artifacts
	I0415 23:38:11.047477    8402 start.go:360] acquireMachinesLock for addons-716538: {Name:mkb868ce7274b10d3fc1b1b5f86ebb914b5c96c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:11.047608    8402 start.go:364] duration metric: took 108.6µs to acquireMachinesLock for "addons-716538"
	I0415 23:38:11.047638    8402 start.go:93] Provisioning new machine with config: &{Name:addons-716538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-716538 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 23:38:11.047713    8402 start.go:125] createHost starting for "" (driver="docker")
	I0415 23:38:11.050298    8402 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0415 23:38:11.050564    8402 start.go:159] libmachine.API.Create for "addons-716538" (driver="docker")
	I0415 23:38:11.050601    8402 client.go:168] LocalClient.Create starting
	I0415 23:38:11.050718    8402 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem
	I0415 23:38:11.574445    8402 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem
	I0415 23:38:11.740271    8402 cli_runner.go:164] Run: docker network inspect addons-716538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0415 23:38:11.755193    8402 cli_runner.go:211] docker network inspect addons-716538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0415 23:38:11.755297    8402 network_create.go:281] running [docker network inspect addons-716538] to gather additional debugging logs...
	I0415 23:38:11.755318    8402 cli_runner.go:164] Run: docker network inspect addons-716538
	W0415 23:38:11.769069    8402 cli_runner.go:211] docker network inspect addons-716538 returned with exit code 1
	I0415 23:38:11.769100    8402 network_create.go:284] error running [docker network inspect addons-716538]: docker network inspect addons-716538: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-716538 not found
	I0415 23:38:11.769113    8402 network_create.go:286] output of [docker network inspect addons-716538]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-716538 not found
	
	** /stderr **
	I0415 23:38:11.769230    8402 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 23:38:11.783575    8402 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40029307b0}
	I0415 23:38:11.783614    8402 network_create.go:124] attempt to create docker network addons-716538 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0415 23:38:11.783677    8402 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-716538 addons-716538
	I0415 23:38:11.849082    8402 network_create.go:108] docker network addons-716538 192.168.49.0/24 created
	I0415 23:38:11.849128    8402 kic.go:121] calculated static IP "192.168.49.2" for the "addons-716538" container
	I0415 23:38:11.849216    8402 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0415 23:38:11.862259    8402 cli_runner.go:164] Run: docker volume create addons-716538 --label name.minikube.sigs.k8s.io=addons-716538 --label created_by.minikube.sigs.k8s.io=true
	I0415 23:38:11.877487    8402 oci.go:103] Successfully created a docker volume addons-716538
	I0415 23:38:11.877582    8402 cli_runner.go:164] Run: docker run --rm --name addons-716538-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716538 --entrypoint /usr/bin/test -v addons-716538:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib
	I0415 23:38:13.957651    8402 cli_runner.go:217] Completed: docker run --rm --name addons-716538-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716538 --entrypoint /usr/bin/test -v addons-716538:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -d /var/lib: (2.080029182s)
	I0415 23:38:13.957683    8402 oci.go:107] Successfully prepared a docker volume addons-716538
	I0415 23:38:13.957714    8402 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 23:38:13.957732    8402 kic.go:194] Starting extracting preloaded images to volume ...
	I0415 23:38:13.957818    8402 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-716538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir
	I0415 23:38:17.969414    8402 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-716538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af -I lz4 -xf /preloaded.tar -C /extractDir: (4.011552127s)
	I0415 23:38:17.969455    8402 kic.go:203] duration metric: took 4.011718121s to extract preloaded images to volume ...
	W0415 23:38:17.969593    8402 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0415 23:38:17.969707    8402 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0415 23:38:18.030350    8402 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-716538 --name addons-716538 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-716538 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-716538 --network addons-716538 --ip 192.168.49.2 --volume addons-716538:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af
	I0415 23:38:18.408112    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Running}}
	I0415 23:38:18.432445    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:18.452366    8402 cli_runner.go:164] Run: docker exec addons-716538 stat /var/lib/dpkg/alternatives/iptables
	I0415 23:38:18.534750    8402 oci.go:144] the created container "addons-716538" has a running status.
	I0415 23:38:18.534777    8402 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa...
	I0415 23:38:19.035489    8402 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0415 23:38:19.054880    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:19.074890    8402 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0415 23:38:19.074927    8402 kic_runner.go:114] Args: [docker exec --privileged addons-716538 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0415 23:38:19.153997    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:19.174003    8402 machine.go:94] provisionDockerMachine start ...
	I0415 23:38:19.174092    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:19.196759    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:19.197027    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:19.197042    8402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0415 23:38:19.359592    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-716538
	
	I0415 23:38:19.359618    8402 ubuntu.go:169] provisioning hostname "addons-716538"
	I0415 23:38:19.359698    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:19.383018    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:19.383366    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:19.383387    8402 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-716538 && echo "addons-716538" | sudo tee /etc/hostname
	I0415 23:38:19.539687    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-716538
	
	I0415 23:38:19.539769    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:19.555340    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:19.555594    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:19.555616    8402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-716538' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-716538/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-716538' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:38:19.699176    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:38:19.699226    8402 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18647-2210/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-2210/.minikube}
	I0415 23:38:19.699269    8402 ubuntu.go:177] setting up certificates
	I0415 23:38:19.699280    8402 provision.go:84] configureAuth start
	I0415 23:38:19.699342    8402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716538
	I0415 23:38:19.714976    8402 provision.go:143] copyHostCerts
	I0415 23:38:19.715056    8402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem (1078 bytes)
	I0415 23:38:19.715218    8402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem (1123 bytes)
	I0415 23:38:19.715296    8402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem (1679 bytes)
	I0415 23:38:19.715363    8402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem org=jenkins.addons-716538 san=[127.0.0.1 192.168.49.2 addons-716538 localhost minikube]
	I0415 23:38:19.935924    8402 provision.go:177] copyRemoteCerts
	I0415 23:38:19.935987    8402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:38:19.936027    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:19.953448    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:20.072364    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0415 23:38:20.098518    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:38:20.126232    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 23:38:20.151636    8402 provision.go:87] duration metric: took 452.3386ms to configureAuth
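The three scp calls above install the CA and the freshly generated server pair under /etc/docker; per the provision line earlier, the server cert was signed with SANs for 127.0.0.1, 192.168.49.2, addons-716538, localhost and minikube. A quick way to confirm the SANs on the provisioned cert (same paths as in the log; a sketch to run inside the machine):

	# Inspect the SAN list embedded in the provisioned server certificate.
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'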
	I0415 23:38:20.151664    8402 ubuntu.go:193] setting minikube options for container-runtime
	I0415 23:38:20.151847    8402 config.go:182] Loaded profile config "addons-716538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:38:20.151913    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:20.168022    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:20.168272    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:20.168287    8402 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0415 23:38:20.315381    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0415 23:38:20.315402    8402 ubuntu.go:71] root file system type: overlay
	I0415 23:38:20.315521    8402 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0415 23:38:20.315593    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:20.331245    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:20.331508    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:20.331589    8402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0415 23:38:20.486342    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
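The generated unit leans on a standard systemd idiom: the empty ExecStart= first clears any start command inherited from the base configuration, and the second ExecStart= installs the dockerd command with the TLS and ulimit flags (the comments inside the unit explain why). A minimal sketch of the same reset-then-set pattern as a drop-in for a hypothetical unit (unit name, path and command are illustrative, not from this run):

	# Hypothetical drop-in override using the same ExecStart reset idiom.
	sudo mkdir -p /etc/systemd/system/myservice.service.d
	sudo tee /etc/systemd/system/myservice.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/local/bin/myservice --flag
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart myservice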
	I0415 23:38:20.486422    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:20.503770    8402 main.go:141] libmachine: Using SSH client type: native
	I0415 23:38:20.504072    8402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0415 23:38:20.504097    8402 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0415 23:38:21.287677    8402 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-04-11 10:51:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-04-15 23:38:20.480556394 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
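The install command is guarded by diff: the mv/daemon-reload/restart branch only runs when the new unit differs from the installed one (diff exits non-zero on a difference), so an unchanged re-provision never bounces dockerd. The same compare-then-swap guard, sketched generically (variable names are illustrative):

	# Swap in the new unit and restart only when the contents actually changed.
	dst=/lib/systemd/system/docker.service
	new="${dst}.new"
	if ! sudo diff -u "$dst" "$new" >/dev/null; then
	  sudo mv "$new" "$dst"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	fi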
	I0415 23:38:21.287707    8402 machine.go:97] duration metric: took 2.113681675s to provisionDockerMachine
	I0415 23:38:21.287718    8402 client.go:171] duration metric: took 10.237105383s to LocalClient.Create
	I0415 23:38:21.287732    8402 start.go:167] duration metric: took 10.23716788s to libmachine.API.Create "addons-716538"
	I0415 23:38:21.287752    8402 start.go:293] postStartSetup for "addons-716538" (driver="docker")
	I0415 23:38:21.287762    8402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:38:21.287835    8402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:38:21.287885    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:21.304194    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:21.404589    8402 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:38:21.407784    8402 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0415 23:38:21.407822    8402 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0415 23:38:21.407833    8402 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0415 23:38:21.407839    8402 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0415 23:38:21.407851    8402 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/addons for local assets ...
	I0415 23:38:21.407925    8402 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/files for local assets ...
	I0415 23:38:21.407952    8402 start.go:296] duration metric: took 120.194976ms for postStartSetup
	I0415 23:38:21.408250    8402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716538
	I0415 23:38:21.424161    8402 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/config.json ...
	I0415 23:38:21.424463    8402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 23:38:21.424513    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:21.439953    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:21.536080    8402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0415 23:38:21.540693    8402 start.go:128] duration metric: took 10.492964896s to createHost
	I0415 23:38:21.540718    8402 start.go:83] releasing machines lock for "addons-716538", held for 10.493097544s
	I0415 23:38:21.540787    8402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-716538
	I0415 23:38:21.555702    8402 ssh_runner.go:195] Run: cat /version.json
	I0415 23:38:21.555755    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:21.555793    8402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:38:21.555848    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:21.575680    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:21.585262    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:21.674524    8402 ssh_runner.go:195] Run: systemctl --version
	I0415 23:38:21.786871    8402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0415 23:38:21.791499    8402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0415 23:38:21.818031    8402 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0415 23:38:21.818118    8402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:38:21.847296    8402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
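Note that the competing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the step is reversible. Roughly the same find-and-rename written as a portable one-off (a sketch; the log's -printf reporting is omitted):

	# Disable bridge/podman CNI configs by renaming them out of kubelet's view.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;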
	I0415 23:38:21.847327    8402 start.go:494] detecting cgroup driver to use...
	I0415 23:38:21.847360    8402 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 23:38:21.847481    8402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:38:21.863923    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0415 23:38:21.878155    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0415 23:38:21.888525    8402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0415 23:38:21.888609    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0415 23:38:21.898912    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 23:38:21.909054    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0415 23:38:21.918543    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0415 23:38:21.928591    8402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:38:21.937480    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0415 23:38:21.947240    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0415 23:38:21.957378    8402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0415 23:38:21.967054    8402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:38:21.976165    8402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 23:38:21.984670    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:22.078261    8402 ssh_runner.go:195] Run: sudo systemctl restart containerd
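Taken together, the sed edits above pin containerd to the cgroupfs driver, the runc v2 shim, the pause:3.9 sandbox image and the /etc/cni/net.d conf dir before this restart. A quick spot-check of the keys those edits should leave behind (expected values per the commands above; a sketch):

	# Verify the containerd settings the sed edits are expected to produce.
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# SystemdCgroup = false
	# sandbox_image = "registry.k8s.io/pause:3.9"
	# conf_dir = "/etc/cni/net.d"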
	I0415 23:38:22.171706    8402 start.go:494] detecting cgroup driver to use...
	I0415 23:38:22.171776    8402 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0415 23:38:22.171848    8402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0415 23:38:22.190915    8402 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0415 23:38:22.191009    8402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0415 23:38:22.207815    8402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:38:22.228565    8402 ssh_runner.go:195] Run: which cri-dockerd
	I0415 23:38:22.232832    8402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0415 23:38:22.242630    8402 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0415 23:38:22.263633    8402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0415 23:38:22.371984    8402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0415 23:38:22.470209    8402 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0415 23:38:22.470385    8402 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0415 23:38:22.494110    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:22.580759    8402 ssh_runner.go:195] Run: sudo systemctl restart docker
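The 130-byte daemon.json pushed just before this restart is what actually switches dockerd's cgroup driver; the key field is exec-opts. An illustrative minimal file (contents assumed, not from the log; the file minikube generates may set additional keys):

	# Illustrative daemon.json pinning the cgroupfs driver.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker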
	I0415 23:38:22.831344    8402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0415 23:38:22.843790    8402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 23:38:22.856545    8402 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0415 23:38:22.953125    8402 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0415 23:38:23.055057    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:23.146779    8402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0415 23:38:23.160627    8402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0415 23:38:23.173044    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:23.262145    8402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0415 23:38:23.328400    8402 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0415 23:38:23.328486    8402 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0415 23:38:23.332049    8402 start.go:562] Will wait 60s for crictl version
	I0415 23:38:23.332118    8402 ssh_runner.go:195] Run: which crictl
	I0415 23:38:23.335946    8402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:38:23.371481    8402 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0415 23:38:23.371567    8402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 23:38:23.390991    8402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0415 23:38:23.412882    8402 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0415 23:38:23.413016    8402 cli_runner.go:164] Run: docker network inspect addons-716538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0415 23:38:23.425431    8402 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0415 23:38:23.428918    8402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
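The one-liner is a replace-or-append hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back over /etc/hosts in one step. The same pattern with the values pulled into variables (an illustrative rewrite of the log's command):

	# Replace-or-append a tab-separated hosts entry without duplicating it.
	ip=192.168.49.1; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts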
	I0415 23:38:23.439420    8402 kubeadm.go:877] updating cluster {Name:addons-716538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-716538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 23:38:23.439545    8402 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 23:38:23.439606    8402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 23:38:23.455157    8402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 23:38:23.455181    8402 docker.go:615] Images already preloaded, skipping extraction
	I0415 23:38:23.455258    8402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0415 23:38:23.471513    8402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0415 23:38:23.471533    8402 cache_images.go:84] Images are preloaded, skipping loading
	I0415 23:38:23.471552    8402 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.29.3 docker true true} ...
	I0415 23:38:23.471653    8402 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-716538 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-716538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:38:23.471715    8402 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0415 23:38:23.515693    8402 cni.go:84] Creating CNI manager for ""
	I0415 23:38:23.515727    8402 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 23:38:23.515740    8402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 23:38:23.515758    8402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-716538 NodeName:addons-716538 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 23:38:23.515910    8402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-716538"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
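This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below and fed to kubeadm init. If a config like this needs checking outside a test run, recent kubeadm releases can validate it offline (a sketch; the validate subcommand exists in kubeadm v1.26 and later, so its availability is an assumption about the binary):

	# Offline sanity-check of a kubeadm config file (kubeadm >= v1.26, assumed).
	/var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml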
	I0415 23:38:23.515981    8402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:38:23.524617    8402 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 23:38:23.524685    8402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 23:38:23.533325    8402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0415 23:38:23.551415    8402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:38:23.569496    8402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0415 23:38:23.587289    8402 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0415 23:38:23.590565    8402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:38:23.601188    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:23.694184    8402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:38:23.709861    8402 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538 for IP: 192.168.49.2
	I0415 23:38:23.709884    8402 certs.go:194] generating shared ca certs ...
	I0415 23:38:23.709902    8402 certs.go:226] acquiring lock for ca certs: {Name:mk0f2c276f9ccc821c50906b5561fa26a27a6ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:23.710112    8402 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key
	I0415 23:38:24.114530    8402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt ...
	I0415 23:38:24.114560    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt: {Name:mk1dda7fef1888bd7f4145f4ef51c3c525b1d9d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.114758    8402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key ...
	I0415 23:38:24.114771    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key: {Name:mk0562bc8cce84c0d3e244ea7bb7e228b3114c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.114856    8402 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key
	I0415 23:38:24.402747    8402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.crt ...
	I0415 23:38:24.402777    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.crt: {Name:mk94fedb8dce58d83a258712433b5e988ab20f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.402948    8402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key ...
	I0415 23:38:24.402958    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key: {Name:mk2ebc8365d617d48e8abfcdcdc029e7c2a4b452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.403030    8402 certs.go:256] generating profile certs ...
	I0415 23:38:24.403089    8402 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.key
	I0415 23:38:24.403105    8402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt with IP's: []
	I0415 23:38:24.981736    8402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt ...
	I0415 23:38:24.981765    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: {Name:mk6181bbf3e837a6321fafe0c2ef8ac782e35aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.981945    8402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.key ...
	I0415 23:38:24.981961    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.key: {Name:mk67340941687a7513d3b33e0d6fb02fb7c24ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:24.982036    8402 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key.8bb5af41
	I0415 23:38:24.982060    8402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt.8bb5af41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0415 23:38:25.169445    8402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt.8bb5af41 ...
	I0415 23:38:25.169475    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt.8bb5af41: {Name:mk095aff621fba68f168e59755498509d383ce6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:25.169660    8402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key.8bb5af41 ...
	I0415 23:38:25.169676    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key.8bb5af41: {Name:mkb3043cf2cebd1e21370c42190aa990aff46f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:25.169788    8402 certs.go:381] copying /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt.8bb5af41 -> /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt
	I0415 23:38:25.169884    8402 certs.go:385] copying /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key.8bb5af41 -> /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key
	I0415 23:38:25.169940    8402 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.key
	I0415 23:38:25.169960    8402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.crt with IP's: []
	I0415 23:38:25.510233    8402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.crt ...
	I0415 23:38:25.510265    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.crt: {Name:mk5d57e6cfc1a551e945e45534d017cee0af6039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:25.510440    8402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.key ...
	I0415 23:38:25.510452    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.key: {Name:mk4438165f46eb8482bf776fcfdc4d36004d75ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:25.510632    8402 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:38:25.510673    8402 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem (1078 bytes)
	I0415 23:38:25.510702    8402 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:38:25.510732    8402 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem (1679 bytes)
	I0415 23:38:25.511339    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:38:25.536007    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:38:25.561490    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:38:25.586483    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0415 23:38:25.610661    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 23:38:25.634497    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:38:25.658935    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:38:25.683114    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:38:25.707684    8402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:38:25.732486    8402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 23:38:25.750287    8402 ssh_runner.go:195] Run: openssl version
	I0415 23:38:25.755822    8402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:38:25.765350    8402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:38:25.768680    8402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:38:25.768774    8402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:38:25.775863    8402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
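The link name b5213941.0 follows OpenSSL's subject-hash convention: verifiers locate CAs in /etc/ssl/certs by the eight-hex-digit hash that openssl x509 -hash prints, with a .0 suffix to disambiguate collisions. Deriving the same link name by hand (paths as in the log):

	# The symlink name is "<subject-hash>.0"; compute the hash from the CA cert.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"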
	I0415 23:38:25.785254    8402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:38:25.788368    8402 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:38:25.788418    8402 kubeadm.go:391] StartCluster: {Name:addons-716538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-716538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:38:25.788537    8402 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0415 23:38:25.802839    8402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 23:38:25.811691    8402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 23:38:25.820373    8402 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0415 23:38:25.820460    8402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 23:38:25.829163    8402 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 23:38:25.829182    8402 kubeadm.go:156] found existing configuration files:
	
	I0415 23:38:25.829232    8402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 23:38:25.838430    8402 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 23:38:25.838514    8402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 23:38:25.847620    8402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 23:38:25.856278    8402 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 23:38:25.856366    8402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 23:38:25.864548    8402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 23:38:25.873275    8402 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 23:38:25.873394    8402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 23:38:25.881894    8402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 23:38:25.891283    8402 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 23:38:25.891385    8402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 23:38:25.899695    8402 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0415 23:38:25.952100    8402 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 23:38:25.952407    8402 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 23:38:26.014594    8402 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0415 23:38:26.014671    8402 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1057-aws
	I0415 23:38:26.014717    8402 kubeadm.go:309] OS: Linux
	I0415 23:38:26.014782    8402 kubeadm.go:309] CGROUPS_CPU: enabled
	I0415 23:38:26.014834    8402 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0415 23:38:26.014888    8402 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0415 23:38:26.014937    8402 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0415 23:38:26.014987    8402 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0415 23:38:26.015040    8402 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0415 23:38:26.015089    8402 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0415 23:38:26.015139    8402 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0415 23:38:26.015188    8402 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0415 23:38:26.094255    8402 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 23:38:26.094377    8402 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 23:38:26.094496    8402 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 23:38:26.334284    8402 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 23:38:26.339008    8402 out.go:204]   - Generating certificates and keys ...
	I0415 23:38:26.339120    8402 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 23:38:26.339221    8402 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 23:38:26.743759    8402 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 23:38:27.253918    8402 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 23:38:27.953933    8402 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 23:38:28.202746    8402 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 23:38:28.534247    8402 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 23:38:28.534582    8402 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-716538 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0415 23:38:29.547468    8402 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 23:38:29.547797    8402 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-716538 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0415 23:38:30.342932    8402 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 23:38:30.635601    8402 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 23:38:30.913908    8402 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 23:38:30.914241    8402 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 23:38:31.382514    8402 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 23:38:32.430653    8402 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 23:38:32.986242    8402 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 23:38:33.309328    8402 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 23:38:33.561854    8402 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 23:38:33.562831    8402 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 23:38:33.569735    8402 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 23:38:33.572558    8402 out.go:204]   - Booting up control plane ...
	I0415 23:38:33.572737    8402 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 23:38:33.572859    8402 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 23:38:33.574166    8402 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 23:38:33.592064    8402 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 23:38:33.592878    8402 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 23:38:33.592937    8402 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 23:38:33.695342    8402 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 23:38:40.697207    8402 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.002262 seconds
	I0415 23:38:40.719700    8402 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 23:38:40.732805    8402 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 23:38:41.259791    8402 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 23:38:41.259990    8402 kubeadm.go:309] [mark-control-plane] Marking the node addons-716538 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 23:38:41.771708    8402 kubeadm.go:309] [bootstrap-token] Using token: 1sfvp9.k04h3zi6nvb9kriw
	I0415 23:38:41.773733    8402 out.go:204]   - Configuring RBAC rules ...
	I0415 23:38:41.773861    8402 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 23:38:41.780551    8402 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 23:38:41.791389    8402 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 23:38:41.795720    8402 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 23:38:41.802389    8402 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 23:38:41.806574    8402 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 23:38:41.820630    8402 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 23:38:42.071190    8402 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 23:38:42.187712    8402 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 23:38:42.188587    8402 kubeadm.go:309] 
	I0415 23:38:42.188670    8402 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 23:38:42.188682    8402 kubeadm.go:309] 
	I0415 23:38:42.188758    8402 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 23:38:42.188768    8402 kubeadm.go:309] 
	I0415 23:38:42.188794    8402 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 23:38:42.188855    8402 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 23:38:42.188908    8402 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 23:38:42.188918    8402 kubeadm.go:309] 
	I0415 23:38:42.189030    8402 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 23:38:42.189045    8402 kubeadm.go:309] 
	I0415 23:38:42.189094    8402 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 23:38:42.189106    8402 kubeadm.go:309] 
	I0415 23:38:42.189163    8402 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 23:38:42.189242    8402 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 23:38:42.189312    8402 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 23:38:42.189320    8402 kubeadm.go:309] 
	I0415 23:38:42.189401    8402 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 23:38:42.189479    8402 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 23:38:42.189488    8402 kubeadm.go:309] 
	I0415 23:38:42.189569    8402 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1sfvp9.k04h3zi6nvb9kriw \
	I0415 23:38:42.189673    8402 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f2f027c52798f9193c8514424c408940f7c6971e09781e9094a26ac3dda81fa1 \
	I0415 23:38:42.189702    8402 kubeadm.go:309] 	--control-plane 
	I0415 23:38:42.189711    8402 kubeadm.go:309] 
	I0415 23:38:42.189812    8402 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 23:38:42.189824    8402 kubeadm.go:309] 
	I0415 23:38:42.189903    8402 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1sfvp9.k04h3zi6nvb9kriw \
	I0415 23:38:42.190006    8402 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f2f027c52798f9193c8514424c408940f7c6971e09781e9094a26ac3dda81fa1 
	I0415 23:38:42.197384    8402 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1057-aws\n", err: exit status 1
	I0415 23:38:42.197507    8402 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
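
Both preflight warnings above are non-fatal here: the kernel-config module is simply absent from this AWS kernel, and minikube supervises the kubelet itself. On a plain host you would clear the second warning exactly as the message says; a sketch, assuming systemd:

    # Enable kubelet at boot so the Service-Kubelet preflight warning goes away
    sudo systemctl enable kubelet.service
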
	I0415 23:38:42.197532    8402 cni.go:84] Creating CNI manager for ""
	I0415 23:38:42.197548    8402 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 23:38:42.200327    8402 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 23:38:42.202970    8402 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 23:38:42.217740    8402 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
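
The 496-byte 1-k8s.conflist itself is not echoed into the log. For orientation only, a bridge CNI config of roughly that shape looks like the following; the subnet and every field value here are assumptions, not the file minikube actually ships:

    # Hypothetical bridge conflist; minikube's real 1-k8s.conflist may differ
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
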
	I0415 23:38:42.260182    8402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 23:38:42.260309    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:42.260399    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-716538 minikube.k8s.io/updated_at=2024_04_15T23_38_42_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=addons-716538 minikube.k8s.io/primary=true
	I0415 23:38:42.561419    8402 ops.go:34] apiserver oom_adj: -16
	I0415 23:38:42.561521    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:43.061872    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:43.561655    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:44.061990    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:44.561653    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:45.061672    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:45.561628    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:46.061864    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:46.561675    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:47.061596    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:47.561697    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:48.061859    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:48.562622    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:49.061709    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:49.561547    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:50.062551    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:50.562209    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:51.061682    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:51.562020    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:52.061651    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:52.561689    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:53.061634    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:53.561638    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:54.061645    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:54.561650    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:55.062195    8402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:38:55.154150    8402 kubeadm.go:1107] duration metric: took 12.893886458s to wait for elevateKubeSystemPrivileges
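
The run of identical `kubectl get sa default` calls above is a poll: the default ServiceAccount is created asynchronously by the controller manager, and minikube retries roughly every 500ms (12.9s in total here) until it exists before the kube-system cluster-admin binding can take effect. The same wait, sketched in shell against the same kubeconfig:

    # Wait as elevateKubeSystemPrivileges does: retry until the SA exists
    until sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
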
	W0415 23:38:55.154183    8402 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 23:38:55.154190    8402 kubeadm.go:393] duration metric: took 29.365778547s to StartCluster
	I0415 23:38:55.154206    8402 settings.go:142] acquiring lock: {Name:mkad41a04993d6fe82f2e16230c6052d1c68b809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:55.154327    8402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:38:55.154738    8402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/kubeconfig: {Name:mk2a4b2f2d98970b43b7e481fd26cc76bda92838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:55.154931    8402 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0415 23:38:55.159248    8402 out.go:177] * Verifying Kubernetes components...
	I0415 23:38:55.155062    8402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 23:38:55.155274    8402 config.go:182] Loaded profile config "addons-716538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:38:55.155286    8402 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
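
The toEnable map above is the full addon switchboard for this profile; the same toggles are driven per addon from the CLI. A sketch with real minikube commands:

    minikube -p addons-716538 addons list
    minikube -p addons-716538 addons enable metrics-server
    minikube -p addons-716538 addons disable cloud-spanner
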
	I0415 23:38:55.161461    8402 addons.go:69] Setting yakd=true in profile "addons-716538"
	I0415 23:38:55.161487    8402 addons.go:234] Setting addon yakd=true in "addons-716538"
	I0415 23:38:55.161492    8402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:38:55.161517    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.161602    8402 addons.go:69] Setting ingress-dns=true in profile "addons-716538"
	I0415 23:38:55.161625    8402 addons.go:234] Setting addon ingress-dns=true in "addons-716538"
	I0415 23:38:55.161655    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.162061    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.162073    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.162428    8402 addons.go:69] Setting inspektor-gadget=true in profile "addons-716538"
	I0415 23:38:55.162459    8402 addons.go:234] Setting addon inspektor-gadget=true in "addons-716538"
	I0415 23:38:55.162494    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.162920    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.163283    8402 addons.go:69] Setting cloud-spanner=true in profile "addons-716538"
	I0415 23:38:55.163312    8402 addons.go:234] Setting addon cloud-spanner=true in "addons-716538"
	I0415 23:38:55.163338    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.163746    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.166024    8402 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-716538"
	I0415 23:38:55.166122    8402 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-716538"
	I0415 23:38:55.166155    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.166559    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.166750    8402 addons.go:69] Setting metrics-server=true in profile "addons-716538"
	I0415 23:38:55.166773    8402 addons.go:234] Setting addon metrics-server=true in "addons-716538"
	I0415 23:38:55.166794    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.167169    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.168703    8402 addons.go:69] Setting default-storageclass=true in profile "addons-716538"
	I0415 23:38:55.168739    8402 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-716538"
	I0415 23:38:55.169039    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.170501    8402 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-716538"
	I0415 23:38:55.170537    8402 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-716538"
	I0415 23:38:55.170574    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.170972    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.173644    8402 addons.go:69] Setting gcp-auth=true in profile "addons-716538"
	I0415 23:38:55.173688    8402 mustload.go:65] Loading cluster: addons-716538
	I0415 23:38:55.173874    8402 config.go:182] Loaded profile config "addons-716538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:38:55.174118    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.181588    8402 addons.go:69] Setting registry=true in profile "addons-716538"
	I0415 23:38:55.181642    8402 addons.go:234] Setting addon registry=true in "addons-716538"
	I0415 23:38:55.181678    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.182204    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.195279    8402 addons.go:69] Setting ingress=true in profile "addons-716538"
	I0415 23:38:55.195340    8402 addons.go:234] Setting addon ingress=true in "addons-716538"
	I0415 23:38:55.195386    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.196010    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.215695    8402 addons.go:69] Setting storage-provisioner=true in profile "addons-716538"
	I0415 23:38:55.215753    8402 addons.go:234] Setting addon storage-provisioner=true in "addons-716538"
	I0415 23:38:55.215789    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.216343    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.248472    8402 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-716538"
	I0415 23:38:55.248571    8402 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-716538"
	I0415 23:38:55.248889    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.283584    8402 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0415 23:38:55.288821    8402 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0415 23:38:55.288890    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0415 23:38:55.289003    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.302818    8402 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0415 23:38:55.279436    8402 addons.go:69] Setting volumesnapshots=true in profile "addons-716538"
	I0415 23:38:55.302773    8402 addons.go:234] Setting addon default-storageclass=true in "addons-716538"
	I0415 23:38:55.308726    8402 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0415 23:38:55.308818    8402 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0415 23:38:55.308856    8402 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 23:38:55.308855    8402 addons.go:234] Setting addon volumesnapshots=true in "addons-716538"
	I0415 23:38:55.308861    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0415 23:38:55.308874    8402 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0415 23:38:55.308887    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.312540    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.316399    8402 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0415 23:38:55.316422    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0415 23:38:55.316491    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.312874    8402 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0415 23:38:55.312880    8402 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:38:55.312884    8402 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0415 23:38:55.312927    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.312945    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0415 23:38:55.320229    8402 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 23:38:55.322231    8402 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 23:38:55.322669    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.322700    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.325743    8402 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0415 23:38:55.325753    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0415 23:38:55.327764    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 23:38:55.342362    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0415 23:38:55.344893    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0415 23:38:55.348137    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0415 23:38:55.353729    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0415 23:38:55.355878    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0415 23:38:55.358284    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0415 23:38:55.360686    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0415 23:38:55.362872    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0415 23:38:55.362896    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0415 23:38:55.362964    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.356695    8402 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0415 23:38:55.342456    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.356703    8402 out.go:177]   - Using image docker.io/registry:2.8.3
	I0415 23:38:55.356772    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.342378    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0415 23:38:55.356831    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
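
Each `docker container inspect -f ... "22/tcp"` call above resolves the host port Docker mapped to the node container's sshd (32772 at the time of this run), and the ssh clients then dial 127.0.0.1 on that port with the profile's generated key as user docker. A manual session under exactly those values from the log:

    # Look up the host port mapped to the node container's sshd
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-716538)

    # Log in as the docker user with the profile's generated key
    ssh -i /home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa \
        -p "$PORT" docker@127.0.0.1
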
	I0415 23:38:55.356858    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.378288    8402 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 23:38:55.373654    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.396064    8402 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:38:55.399360    8402 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 23:38:55.399384    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0415 23:38:55.399450    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.437518    8402 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:38:55.437590    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 23:38:55.437765    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.460293    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.479535    8402 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0415 23:38:55.479614    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0415 23:38:55.479698    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.494845    8402 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-716538"
	I0415 23:38:55.494887    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:38:55.495431    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:38:55.502196    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.514895    8402 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0415 23:38:55.519273    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0415 23:38:55.519296    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0415 23:38:55.519371    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.531636    8402 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 23:38:55.531658    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 23:38:55.531715    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.538834    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.539574    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.586008    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.595409    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.612618    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.646256    8402 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0415 23:38:55.645053    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.650351    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.655293    8402 out.go:177]   - Using image docker.io/busybox:stable
	I0415 23:38:55.657175    8402 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 23:38:55.657195    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0415 23:38:55.657263    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:38:55.662257    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.662752    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	W0415 23:38:55.664857    8402 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0415 23:38:55.664884    8402 retry.go:31] will retry after 299.114246ms: ssh: handshake failed: EOF
	I0415 23:38:55.687267    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:38:55.842298    8402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 23:38:55.842469    8402 ssh_runner.go:195] Run: sudo systemctl start kubelet
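
The bash pipeline above rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to splice a `hosts` block in front of the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then feeds the result back through `kubectl replace`. After the edit the affected Corefile sections should read roughly as follows (layout assumed from the sed expressions; the lines between are the stock plugins):

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
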
	I0415 23:38:55.985233    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0415 23:38:56.112533    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 23:38:56.216188    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0415 23:38:56.216217    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0415 23:38:56.222690    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 23:38:56.327729    8402 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0415 23:38:56.327761    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0415 23:38:56.339628    8402 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0415 23:38:56.339655    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0415 23:38:56.357117    8402 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0415 23:38:56.357150    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0415 23:38:56.360559    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 23:38:56.380395    8402 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 23:38:56.380429    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0415 23:38:56.416828    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 23:38:56.492864    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:38:56.510878    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 23:38:56.540870    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0415 23:38:56.540898    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0415 23:38:56.669372    8402 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0415 23:38:56.669399    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0415 23:38:56.678078    8402 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 23:38:56.678107    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 23:38:56.762764    8402 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0415 23:38:56.762792    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0415 23:38:56.808650    8402 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0415 23:38:56.808685    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0415 23:38:56.827358    8402 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0415 23:38:56.827384    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0415 23:38:56.846772    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0415 23:38:56.846801    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0415 23:38:56.968350    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0415 23:38:56.976212    8402 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0415 23:38:56.976242    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0415 23:38:57.009137    8402 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 23:38:57.009165    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 23:38:57.041133    8402 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0415 23:38:57.041163    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0415 23:38:57.083394    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0415 23:38:57.083428    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0415 23:38:57.151946    8402 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0415 23:38:57.151983    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0415 23:38:57.186797    8402 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0415 23:38:57.186840    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0415 23:38:57.277785    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 23:38:57.328157    8402 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0415 23:38:57.328183    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0415 23:38:57.351553    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0415 23:38:57.351581    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0415 23:38:57.516857    8402 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0415 23:38:57.516899    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0415 23:38:57.619742    8402 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0415 23:38:57.619787    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0415 23:38:57.636125    8402 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0415 23:38:57.636157    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0415 23:38:57.712455    8402 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:38:57.712488    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0415 23:38:57.894637    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0415 23:38:57.894678    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0415 23:38:57.970378    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0415 23:38:58.041574    8402 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0415 23:38:58.041606    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0415 23:38:58.087126    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:38:58.240106    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0415 23:38:58.240191    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0415 23:38:58.281976    8402 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 23:38:58.282042    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0415 23:38:58.646201    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0415 23:38:58.646227    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0415 23:38:58.710695    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 23:38:58.801269    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0415 23:38:58.801295    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0415 23:38:59.039501    8402 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.196986265s)
	I0415 23:38:59.039850    8402 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.197477511s)
	I0415 23:38:59.039871    8402 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0415 23:38:59.040018    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.054756949s)
	I0415 23:38:59.041250    8402 node_ready.go:35] waiting up to 6m0s for node "addons-716538" to be "Ready" ...
	I0415 23:38:59.045854    8402 node_ready.go:49] node "addons-716538" has status "Ready":"True"
	I0415 23:38:59.045886    8402 node_ready.go:38] duration metric: took 4.605952ms for node "addons-716538" to be "Ready" ...
	I0415 23:38:59.045898    8402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:38:59.060372    8402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6n4th" in "kube-system" namespace to be "Ready" ...
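
node_ready and pod_ready above poll the API for the node's Ready condition and for each system-critical pod by the label selectors listed. The same checks from a shell, sketched with real kubectl flags:

    # Node Ready condition (prints "True" when ready)
    kubectl get node addons-716538 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Block until a system-critical pod reports Ready
    kubectl -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=6m
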
	I0415 23:38:59.184588    8402 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 23:38:59.184620    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0415 23:38:59.302017    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 23:38:59.571701    8402 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-716538" context rescaled to 1 replicas
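
The rescale above trims the stock two-replica coredns Deployment down to one for this single-node profile, which is why the next few lines still see both original pods go Ready before one is reaped. The equivalent manual step:

    # Scale coredns down to a single replica (real kubectl command)
    kubectl -n kube-system scale deployment coredns --replicas=1
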
	I0415 23:39:00.180685    8402 pod_ready.go:92] pod "coredns-76f75df574-6n4th" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.180711    8402 pod_ready.go:81] duration metric: took 1.120304693s for pod "coredns-76f75df574-6n4th" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.180733    8402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qstbp" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.224066    8402 pod_ready.go:92] pod "coredns-76f75df574-qstbp" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.224106    8402 pod_ready.go:81] duration metric: took 43.35734ms for pod "coredns-76f75df574-qstbp" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.224120    8402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.247637    8402 pod_ready.go:92] pod "etcd-addons-716538" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.247683    8402 pod_ready.go:81] duration metric: took 23.553873ms for pod "etcd-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.247698    8402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.265068    8402 pod_ready.go:92] pod "kube-apiserver-addons-716538" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.265104    8402 pod_ready.go:81] duration metric: took 17.396924ms for pod "kube-apiserver-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.265118    8402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.273426    8402 pod_ready.go:92] pod "kube-controller-manager-addons-716538" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.273467    8402 pod_ready.go:81] duration metric: took 8.340587ms for pod "kube-controller-manager-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.273481    8402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j69k5" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.645249    8402 pod_ready.go:92] pod "kube-proxy-j69k5" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:00.645276    8402 pod_ready.go:81] duration metric: took 371.786712ms for pod "kube-proxy-j69k5" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.645288    8402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:00.745110    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.632517029s)
	I0415 23:39:01.045859    8402 pod_ready.go:92] pod "kube-scheduler-addons-716538" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:01.045887    8402 pod_ready.go:81] duration metric: took 400.591618ms for pod "kube-scheduler-addons-716538" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:01.045899    8402 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:01.847703    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.624973383s)
	I0415 23:39:01.847848    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.487264905s)
	I0415 23:39:02.216022    8402 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0415 23:39:02.216136    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:39:02.245561    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:39:03.054295    8402 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:03.217390    8402 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0415 23:39:03.525201    8402 addons.go:234] Setting addon gcp-auth=true in "addons-716538"
	I0415 23:39:03.525302    8402 host.go:66] Checking if "addons-716538" exists ...
	I0415 23:39:03.525790    8402 cli_runner.go:164] Run: docker container inspect addons-716538 --format={{.State.Status}}
	I0415 23:39:03.546938    8402 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0415 23:39:03.546989    8402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-716538
	I0415 23:39:03.570915    8402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/addons-716538/id_rsa Username:docker}
	I0415 23:39:05.552223    8402 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:05.832531    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.415665502s)
	I0415 23:39:05.832614    8402 addons.go:470] Verifying addon ingress=true in "addons-716538"
	I0415 23:39:05.835643    8402 out.go:177] * Verifying ingress addon...
	I0415 23:39:05.832795    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.339902862s)
	I0415 23:39:05.832872    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.32191223s)
	I0415 23:39:05.832898    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.86451634s)
	I0415 23:39:05.832945    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.555133878s)
	I0415 23:39:05.832974    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.862564666s)
	I0415 23:39:05.833080    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.745920855s)
	I0415 23:39:05.833151    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.122413754s)
	I0415 23:39:05.835978    8402 addons.go:470] Verifying addon registry=true in "addons-716538"
	I0415 23:39:05.836148    8402 addons.go:470] Verifying addon metrics-server=true in "addons-716538"
	W0415 23:39:05.836173    8402 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 23:39:05.840021    8402 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0415 23:39:05.842065    8402 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-716538 service yakd-dashboard -n yakd-dashboard
	
	I0415 23:39:05.842264    8402 retry.go:31] will retry after 242.203447ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
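
The failure above is an ordering race, not a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server has not yet established the new type, hence "ensure CRDs are installed first". minikube simply retries (below, with `apply --force`) and succeeds; a race-free sequence would apply the CRDs, wait for them to be Established, then apply the class. A sketch:

    # Apply the CRDs first and wait until the API serves the new type
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io

    # Now the VolumeSnapshotClass object can be created safely
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
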
	I0415 23:39:05.842047    8402 out.go:177] * Verifying registry addon...
	I0415 23:39:05.847844    8402 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0415 23:39:05.854413    8402 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0415 23:39:05.854443    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:05.860558    8402 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0415 23:39:05.860586    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:06.087156    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:39:06.346525    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:06.353557    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:06.849018    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:06.857300    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:07.161860    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.859776773s)
	I0415 23:39:07.161897    8402 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-716538"
	I0415 23:39:07.164484    8402 out.go:177] * Verifying csi-hostpath-driver addon...
	I0415 23:39:07.162086    8402 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.615129s)
	I0415 23:39:07.168974    8402 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0415 23:39:07.167572    8402 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0415 23:39:07.173117    8402 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:39:07.175382    8402 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0415 23:39:07.175412    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0415 23:39:07.178577    8402 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0415 23:39:07.178602    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:07.279251    8402 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0415 23:39:07.279272    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0415 23:39:07.346928    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:07.358005    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:07.382020    8402 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 23:39:07.382047    8402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
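The "scp memory --> <path>" lines mean the runner streams an in-memory byte slice to a file on the node rather than copying a local file. A hedged approximation using ssh and sudo tee fed from stdin; minikube's real transport is its own SSH runner, and the host, path, and payload below are illustrative.

// Sketch: write an in-memory payload to a remote path over ssh.
package main

import (
	"bytes"
	"os/exec"
)

// writeRemote streams data from memory to path on host via `sudo tee`.
func writeRemote(host, path string, data []byte) error {
	cmd := exec.Command("ssh", host, "sudo", "tee", path)
	cmd.Stdin = bytes.NewReader(data) // the payload never touches local disk
	return cmd.Run()
}

func main() {
	yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")
	if err := writeRemote("docker@192.168.49.2", "/etc/kubernetes/addons/gcp-auth-ns.yaml", yaml); err != nil {
		panic(err)
	}
}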
	I0415 23:39:07.483469    8402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 23:39:07.553165    8402 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:07.677295    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:07.847517    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:07.852502    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:08.178624    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:08.347811    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:08.353526    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:08.677804    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:08.705068    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.617855164s)
	I0415 23:39:08.854993    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:08.865976    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:08.911322    8402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.427810646s)
	I0415 23:39:08.914712    8402 addons.go:470] Verifying addon gcp-auth=true in "addons-716538"
	I0415 23:39:08.916560    8402 out.go:177] * Verifying gcp-auth addon...
	I0415 23:39:08.919487    8402 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0415 23:39:08.967048    8402 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0415 23:39:08.967111    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:09.176386    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:09.347437    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:09.353367    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:09.424300    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:09.676940    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:09.846216    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:09.853098    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:09.924615    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:10.053712    8402 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:10.176743    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:10.347605    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:10.353203    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:10.425180    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:10.677097    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:10.846900    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:10.852482    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:10.923709    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:11.055782    8402 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:11.055816    8402 pod_ready.go:81] duration metric: took 10.009908514s for pod "nvidia-device-plugin-daemonset-sfnhr" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:11.055826    8402 pod_ready.go:38] duration metric: took 12.009917467s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:39:11.055846    8402 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:39:11.055952    8402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:39:11.086752    8402 api_server.go:72] duration metric: took 15.931791675s to wait for apiserver process to appear ...
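The process check above is a pgrep poll: keep running pgrep until a kube-apiserver process matching the minikube profile exists. A minimal sketch under the same pattern; the pattern string is copied from the log, the loop bounds are illustrative.

// Sketch: poll for the apiserver process via pgrep.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		// -x: whole command line must match, -n: newest, -f: match full command line
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver process never appeared")
}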
	I0415 23:39:11.086779    8402 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:39:11.086799    8402 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0415 23:39:11.097076    8402 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0415 23:39:11.098991    8402 api_server.go:141] control plane version: v1.29.3
	I0415 23:39:11.099023    8402 api_server.go:131] duration metric: took 12.237774ms to wait for apiserver health ...
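The healthz probe is a plain HTTPS GET expecting a 200 with body "ok". minikube authenticates with the cluster's client certificate; this sketch instead skips TLS verification to stay short and assumes anonymous access to /healthz is permitted, so treat it as illustrative only.

// Sketch: probe the apiserver's /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	const url = "https://192.168.49.2:8443/healthz"
	client := &http.Client{Transport: &http.Transport{
		// The apiserver cert is signed by minikube's own CA; a real client
		// would load that CA instead of skipping verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}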
	I0415 23:39:11.099036    8402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:39:11.111595    8402 system_pods.go:59] 17 kube-system pods found
	I0415 23:39:11.111635    8402 system_pods.go:61] "coredns-76f75df574-6n4th" [bb59d513-554b-4636-a6e4-158575ccb815] Running
	I0415 23:39:11.111646    8402 system_pods.go:61] "csi-hostpath-attacher-0" [9f5cce1e-4668-493d-a813-f1d2c7ffa655] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 23:39:11.111656    8402 system_pods.go:61] "csi-hostpath-resizer-0" [60a2e439-8985-4963-9bf4-c6dbe177db31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 23:39:11.111666    8402 system_pods.go:61] "csi-hostpathplugin-qsngp" [ca3d0f82-4303-4317-b453-108208636846] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 23:39:11.111676    8402 system_pods.go:61] "etcd-addons-716538" [bcb65593-a6ca-44de-80bb-78d1e5c727f2] Running
	I0415 23:39:11.111683    8402 system_pods.go:61] "kube-apiserver-addons-716538" [986b17cb-e575-4a35-a8e2-956988a8a282] Running
	I0415 23:39:11.111692    8402 system_pods.go:61] "kube-controller-manager-addons-716538" [d6b20c9a-6081-4c2e-b5fe-b4b3b2e6c06b] Running
	I0415 23:39:11.111700    8402 system_pods.go:61] "kube-ingress-dns-minikube" [b54b52e5-5f4c-4c6d-9fdb-b9b1b6e18e80] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 23:39:11.111704    8402 system_pods.go:61] "kube-proxy-j69k5" [5c9fc4cf-34a5-4a3e-96d3-4cbf1fab4955] Running
	I0415 23:39:11.111716    8402 system_pods.go:61] "kube-scheduler-addons-716538" [ce5afdd2-4bf5-4b44-b555-ad75396bace4] Running
	I0415 23:39:11.111722    8402 system_pods.go:61] "metrics-server-75d6c48ddd-hz8lx" [479916e2-3562-4ddc-b0b7-942de45464b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 23:39:11.111733    8402 system_pods.go:61] "nvidia-device-plugin-daemonset-sfnhr" [cf47f930-c16f-4bc9-94a8-8abe11547e86] Running
	I0415 23:39:11.111740    8402 system_pods.go:61] "registry-kkk4n" [7b7d970c-ef98-4622-a315-2fff1161f506] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 23:39:11.111762    8402 system_pods.go:61] "registry-proxy-kx5gv" [c7a468da-2d16-4683-a615-a02aee9f0e45] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 23:39:11.111770    8402 system_pods.go:61] "snapshot-controller-58dbcc7b99-448xj" [0a734fd2-4416-4b04-811a-b4e45a5eec9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:11.111783    8402 system_pods.go:61] "snapshot-controller-58dbcc7b99-hfbcp" [1fa86be4-3d92-4874-b336-6b1990044e86] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:11.111788    8402 system_pods.go:61] "storage-provisioner" [c89fc049-e62e-42b3-a807-dc43eba1de65] Running
	I0415 23:39:11.111796    8402 system_pods.go:74] duration metric: took 12.752831ms to wait for pod list to return data ...
	I0415 23:39:11.111809    8402 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:39:11.116547    8402 default_sa.go:45] found service account: "default"
	I0415 23:39:11.116574    8402 default_sa.go:55] duration metric: took 4.757293ms for default service account to be created ...
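The "default" ServiceAccount is created asynchronously by kube-controller-manager, which is why the test polls for it rather than asserting it exists. A client-go sketch of that wait, assuming a standard kubeconfig; the polling interval is illustrative.

// Sketch: wait for the default service account to be created.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found service account: %q\n", sa.Name)
			return
		}
		if !errors.IsNotFound(err) {
			panic(err) // anything other than NotFound is a real failure
		}
		time.Sleep(200 * time.Millisecond)
	}
}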
	I0415 23:39:11.116585    8402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:39:11.131292    8402 system_pods.go:86] 17 kube-system pods found
	I0415 23:39:11.131327    8402 system_pods.go:89] "coredns-76f75df574-6n4th" [bb59d513-554b-4636-a6e4-158575ccb815] Running
	I0415 23:39:11.131339    8402 system_pods.go:89] "csi-hostpath-attacher-0" [9f5cce1e-4668-493d-a813-f1d2c7ffa655] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 23:39:11.131347    8402 system_pods.go:89] "csi-hostpath-resizer-0" [60a2e439-8985-4963-9bf4-c6dbe177db31] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 23:39:11.131356    8402 system_pods.go:89] "csi-hostpathplugin-qsngp" [ca3d0f82-4303-4317-b453-108208636846] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 23:39:11.131367    8402 system_pods.go:89] "etcd-addons-716538" [bcb65593-a6ca-44de-80bb-78d1e5c727f2] Running
	I0415 23:39:11.131422    8402 system_pods.go:89] "kube-apiserver-addons-716538" [986b17cb-e575-4a35-a8e2-956988a8a282] Running
	I0415 23:39:11.131437    8402 system_pods.go:89] "kube-controller-manager-addons-716538" [d6b20c9a-6081-4c2e-b5fe-b4b3b2e6c06b] Running
	I0415 23:39:11.131445    8402 system_pods.go:89] "kube-ingress-dns-minikube" [b54b52e5-5f4c-4c6d-9fdb-b9b1b6e18e80] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 23:39:11.131450    8402 system_pods.go:89] "kube-proxy-j69k5" [5c9fc4cf-34a5-4a3e-96d3-4cbf1fab4955] Running
	I0415 23:39:11.131458    8402 system_pods.go:89] "kube-scheduler-addons-716538" [ce5afdd2-4bf5-4b44-b555-ad75396bace4] Running
	I0415 23:39:11.131521    8402 system_pods.go:89] "metrics-server-75d6c48ddd-hz8lx" [479916e2-3562-4ddc-b0b7-942de45464b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 23:39:11.131547    8402 system_pods.go:89] "nvidia-device-plugin-daemonset-sfnhr" [cf47f930-c16f-4bc9-94a8-8abe11547e86] Running
	I0415 23:39:11.131555    8402 system_pods.go:89] "registry-kkk4n" [7b7d970c-ef98-4622-a315-2fff1161f506] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 23:39:11.131569    8402 system_pods.go:89] "registry-proxy-kx5gv" [c7a468da-2d16-4683-a615-a02aee9f0e45] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 23:39:11.131578    8402 system_pods.go:89] "snapshot-controller-58dbcc7b99-448xj" [0a734fd2-4416-4b04-811a-b4e45a5eec9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:11.131588    8402 system_pods.go:89] "snapshot-controller-58dbcc7b99-hfbcp" [1fa86be4-3d92-4874-b336-6b1990044e86] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:11.131599    8402 system_pods.go:89] "storage-provisioner" [c89fc049-e62e-42b3-a807-dc43eba1de65] Running
	I0415 23:39:11.131609    8402 system_pods.go:126] duration metric: took 15.017355ms to wait for k8s-apps to be running ...
	I0415 23:39:11.131622    8402 system_svc.go:44] waiting for kubelet service to be running ...
	I0415 23:39:11.131681    8402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:39:11.150668    8402 system_svc.go:56] duration metric: took 19.036834ms for WaitForService to wait for kubelet
	I0415 23:39:11.150697    8402 kubeadm.go:576] duration metric: took 15.995742469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
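The kubelet check above relies entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A one-call sketch, assuming the unit is named kubelet and the check runs on the node itself.

// Sketch: is the kubelet systemd unit active?
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}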
	I0415 23:39:11.150718    8402 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:39:11.154948    8402 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0415 23:39:11.154988    8402 node_conditions.go:123] node cpu capacity is 2
	I0415 23:39:11.155000    8402 node_conditions.go:105] duration metric: took 4.276606ms to run NodePressure ...
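The NodePressure verification reads each node's conditions and capacities: the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should be False, and the log then reports the ephemeral-storage and cpu capacities. A client-go sketch that prints the same figures; the kubeconfig path is illustrative.

// Sketch: verify node pressure conditions and print capacities.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("node %s under pressure: %s=%s\n", n.Name, c.Type, c.Status)
				}
			}
		}
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}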
	I0415 23:39:11.155011    8402 start.go:240] waiting for startup goroutines ...
	I0415 23:39:11.178670    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:11.347654    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:11.353922    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:11.423756    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:11.676989    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:11.846931    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:11.852693    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:11.924066    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:12.183384    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:12.346914    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:12.354847    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:12.424345    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:12.677496    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:12.848004    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:12.853100    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:12.924263    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:13.178595    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:13.347391    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:13.353094    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:13.424163    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:13.678008    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:13.848461    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:13.853516    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:13.923741    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:14.176432    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:14.346746    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:14.352877    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:14.423520    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:14.677364    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:14.847076    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:14.852613    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:14.923334    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:15.180952    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:15.347374    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:15.352735    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:15.423648    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:15.680848    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:15.847068    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:15.853398    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:15.924334    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:16.177277    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:16.349436    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:16.357185    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:16.423810    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:16.676322    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:16.848129    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:16.853330    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:16.924212    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:17.176539    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:17.347036    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:17.352671    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:17.423115    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:17.676833    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:17.847709    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:17.852612    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:17.923664    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:18.176699    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:18.346926    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:18.352561    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:18.422917    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:18.676648    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:18.847463    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:18.853249    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:18.927711    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:19.176831    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:19.347770    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:19.353949    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:19.423410    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:19.678989    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:19.846450    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:19.853935    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:19.923556    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:20.179131    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:20.348306    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:20.353383    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:20.424012    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:20.676788    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:20.848455    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:20.852858    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:20.924139    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:21.177609    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:21.353447    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:21.359326    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:21.425481    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:21.676676    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:21.847521    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:21.853193    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:21.923942    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:22.178131    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:22.347624    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:22.361382    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:22.433578    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:22.678525    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:22.849496    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:22.857852    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:22.924720    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:23.178215    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:23.347509    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:23.355316    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:23.423463    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:23.677993    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:23.850821    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:23.855684    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:23.928971    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:24.177797    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:24.347646    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:24.353214    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:24.433808    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:24.677127    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:24.868891    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:24.870181    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:24.945160    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:25.177733    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:25.347044    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:25.353177    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:25.424086    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:25.678426    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:25.846949    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:25.852540    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:25.923054    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:26.176977    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:26.347053    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:26.353163    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:26.423828    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:26.676440    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:26.847168    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:26.852732    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:26.923739    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:27.177824    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:27.346774    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:27.353142    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:27.425365    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:27.677804    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:27.848766    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:27.853323    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:27.923586    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:28.176958    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:28.346953    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:28.352716    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:28.423960    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:28.680034    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:28.846969    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:28.852734    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:28.929456    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:29.179510    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:29.354386    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:29.358688    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:29.425544    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:29.677575    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:29.847663    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:29.853732    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:29.923717    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:30.197781    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:30.347684    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:30.353057    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:30.424280    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:30.676943    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:30.848761    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:30.853790    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:30.924425    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:31.176767    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:31.347753    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:31.353458    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:31.424457    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:31.677265    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:31.847259    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:31.853382    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:31.923904    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:32.177782    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:32.347270    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:32.353174    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:32.424944    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:32.680642    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:32.847751    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:32.852608    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:32.924403    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:33.177100    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:33.351011    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:33.356622    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:33.424915    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:33.677418    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:33.847129    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:33.852942    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:33.924988    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:34.179551    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:34.347500    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:34.353355    8402 kapi.go:107] duration metric: took 28.505510822s to wait for kubernetes.io/minikube-addons=registry ...
	I0415 23:39:34.424404    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:34.676528    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:34.847406    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:34.923985    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:35.178290    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:35.355610    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:35.424075    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:35.678686    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:35.847989    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:35.923136    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:36.181522    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:36.346800    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:36.423541    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:36.677387    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:36.855491    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:36.924365    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:37.177211    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:37.346806    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:37.427496    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:37.677101    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:37.846647    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:37.923440    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:38.177462    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:38.347308    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:38.424101    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:38.676400    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:38.847309    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:38.928490    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:39.177862    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:39.347047    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:39.424427    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:39.678447    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:39.847184    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:39.924150    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:40.177600    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:40.351023    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:40.424274    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:40.677512    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:40.846946    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:40.937671    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:41.176578    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:41.347532    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:41.423539    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:41.677612    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:41.847783    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:41.923266    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:42.184223    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:42.348140    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:42.424252    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:42.677073    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:42.847032    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:42.929667    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:43.179876    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:43.346977    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:43.423571    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:43.679587    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:43.846505    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:43.925327    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:44.177553    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:44.347363    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:44.424192    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:44.677343    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:44.854287    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:44.924579    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:45.183794    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:45.349311    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:45.424100    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:45.677239    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:45.848488    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:45.924652    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:46.177695    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:46.356296    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:46.426157    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:46.677357    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:46.846943    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:46.924956    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:47.177599    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:47.346734    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:47.423424    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:47.680288    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:47.854042    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:47.924106    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:48.176899    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:48.346705    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:48.423327    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:48.703431    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:48.848120    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:48.926356    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:49.177135    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:49.347319    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:49.425917    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:49.681829    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:49.847396    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:49.924360    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:50.177773    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:50.347473    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:50.426197    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:50.677080    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:50.846626    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:50.923484    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:51.177075    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:51.346560    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:51.423276    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:51.676651    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:51.847303    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:51.924046    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:52.178863    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:52.346367    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:52.424210    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:52.676751    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:52.847951    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:52.923868    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:53.177337    8402 kapi.go:107] duration metric: took 46.009763474s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0415 23:39:53.346454    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:53.424006    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:53.847118    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:53.923718    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:54.346230    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:54.423700    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:54.846326    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:54.924130    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:55.346725    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:55.423441    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:55.847794    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:55.923493    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:56.347342    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:56.423888    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:56.854018    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:56.923882    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:57.347505    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:57.424102    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:57.847679    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:57.924545    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:58.347013    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:58.423632    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:58.847025    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:58.923996    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:59.347155    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:59.423987    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:59.847557    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:59.924178    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:00.365702    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:00.427503    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:00.850287    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:00.924411    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:01.347655    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:01.423456    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:01.847404    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:01.925689    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:02.346926    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:02.423731    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:02.858366    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:02.925048    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:03.348538    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:03.423233    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:03.847872    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:03.923491    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:04.347791    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:04.423728    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:04.846384    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:04.925036    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:05.346852    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:05.423825    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:05.846978    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:05.923686    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:06.347790    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:06.423590    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:06.847348    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:06.924683    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:07.347861    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:07.423744    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:07.846344    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:07.924932    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:08.347501    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:08.423991    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:08.846550    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:08.923588    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:09.347379    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:09.424292    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:09.846524    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:09.923537    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:10.347257    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:10.424223    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:10.846673    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:10.923083    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:11.346766    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:11.423283    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:11.850594    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:11.923156    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:12.347023    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:12.423820    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:12.846330    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:12.923860    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:13.346851    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:13.424066    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:13.847008    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:13.923677    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:14.347050    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:14.423653    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:14.846362    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:14.924273    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:15.347607    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:15.423894    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:15.847644    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:15.928680    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:16.347566    8402 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:16.424011    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:16.846752    8402 kapi.go:107] duration metric: took 1m11.006738704s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0415 23:40:16.923449    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:17.423967    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:17.924046    8402 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:18.424279    8402 kapi.go:107] duration metric: took 1m9.504800971s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0415 23:40:18.426751    8402 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-716538 cluster.
	I0415 23:40:18.428692    8402 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0415 23:40:18.430717    8402 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0415 23:40:18.432779    8402 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0415 23:40:18.434931    8402 addons.go:505] duration metric: took 1m23.279637414s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0415 23:40:18.434980    8402 start.go:245] waiting for cluster config update ...
	I0415 23:40:18.435001    8402 start.go:254] writing updated cluster config ...
	I0415 23:40:18.436186    8402 ssh_runner.go:195] Run: rm -f paused
	I0415 23:40:18.780007    8402 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 23:40:18.782705    8402 out.go:177] * Done! kubectl is now configured to use "addons-716538" cluster and "default" namespace by default
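The kapi.go:96 / kapi.go:107 pairs above are minikube's addon readiness loop: list the addon's pods by label selector, log the phase while any pod is still Pending, and emit a duration metric once everything is Running. Below is a minimal sketch of that polling pattern, assuming k8s.io/client-go; waitForPodsRunning and the ~500ms interval are illustrative guesses, not minikube's actual implementation.

// Sketch of the poll loop behind the kapi.go:96/107 lines above,
// assuming k8s.io/client-go. waitForPodsRunning is a hypothetical name.
package kapi

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
		}
		time.Sleep(interval) // the log above ticks at roughly 500ms per selector
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}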
	
	
	==> Docker <==
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.319035760Z" level=info msg="ignoring event" container=500e663746c83dfc8d85938ae64467e74ed823441a5fe3e16654cf2e01fa0598 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.326119760Z" level=info msg="ignoring event" container=c32134d5d9d6dfc99151258b9034fcefe1501b851ce7a6d1785ba34d6d2b9a9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.329162750Z" level=info msg="ignoring event" container=6c976ba65962ea922b84d8e41a0731ae422c1534832e2e9001430c12caf550d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.365784406Z" level=info msg="ignoring event" container=0a02b051dbaa6511e41585a006a387fa6265a54f6d39a5bd31911c9349f461ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.365843965Z" level=info msg="ignoring event" container=c0d0824556785420f2a57697882fbd384a8b4870ae5e7a3255ef26f8e9f7d808 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.380525973Z" level=info msg="ignoring event" container=622c51ea15dbf698e7ba36bd0a78067afbf7b3d739916adf0448cabce6f56816 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.380578829Z" level=info msg="ignoring event" container=2eadbc9a9d71692fee5cd433ba73ac22b2a6f8320ee2e20017c55022625f8931 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.492828314Z" level=info msg="ignoring event" container=9780e906eaa071cb90198a3d61ce7bb86e604173ec196ca7284de45072c660f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.589641534Z" level=info msg="ignoring event" container=9f8bb620b461d7498b48f9b2e22970793a4a8edc3deaa4a2c8ffdbfac4b76240 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:08 addons-716538 dockerd[1144]: time="2024-04-15T23:41:08.637157516Z" level=info msg="ignoring event" container=358e7a213ed16eeeb1d345a7ba606533a89fcfae3e87b8506c222666e4a778b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:14 addons-716538 dockerd[1144]: time="2024-04-15T23:41:14.824606045Z" level=info msg="ignoring event" container=10abc9520b45fc92dcc6df4ba7900463961e42cd12dc67df03b20455b83d7225 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:14 addons-716538 dockerd[1144]: time="2024-04-15T23:41:14.827942027Z" level=info msg="ignoring event" container=8cbb9be93f75dcdaf0b5d2346ec90f15703f9eb61fa70d08b96a169d93495946 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:14 addons-716538 dockerd[1144]: time="2024-04-15T23:41:14.995827941Z" level=info msg="ignoring event" container=0514a57c26387219c5e2c7a6381aaa5707f54f8b8e5ef888ac529091c6ff5efb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:15 addons-716538 dockerd[1144]: time="2024-04-15T23:41:15.093858638Z" level=info msg="ignoring event" container=40eaf04c3c5807f3d5ab00dbcdb3fc4e0a3328d8fc892bedd4ab0fc6c40527ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:19 addons-716538 dockerd[1144]: time="2024-04-15T23:41:19.930924012Z" level=info msg="ignoring event" container=9ea92e7d3c45407c36b954a5c2ae50d2a1036bd7ce114aade85d342d7cb5bf8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:23 addons-716538 dockerd[1144]: time="2024-04-15T23:41:23.972282273Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=ff11bf5de6785394968e4ff1a793ae6a58ecd1ddd57e518fb1f928a0d6f0b4f2 spanID=be9c4a66da169ba5 traceID=3d40588d47f7106a37af1bddf286ff84
	Apr 15 23:41:24 addons-716538 dockerd[1144]: time="2024-04-15T23:41:24.019237414Z" level=info msg="ignoring event" container=ff11bf5de6785394968e4ff1a793ae6a58ecd1ddd57e518fb1f928a0d6f0b4f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:24 addons-716538 dockerd[1144]: time="2024-04-15T23:41:24.147586160Z" level=info msg="ignoring event" container=597a6a6974a86bb15470db7fe7515ff3ef2f5c8c4951aa1545a67f97e2323380 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:25 addons-716538 dockerd[1144]: time="2024-04-15T23:41:25.381015515Z" level=info msg="ignoring event" container=16767d2d1350b178805cb4c0c4e8741a2a025fb8b280852f39152c1f3928ef00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:26 addons-716538 dockerd[1144]: time="2024-04-15T23:41:26.497872232Z" level=info msg="ignoring event" container=7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:26 addons-716538 dockerd[1144]: time="2024-04-15T23:41:26.720098701Z" level=info msg="ignoring event" container=3007e4eff6d7b92dae2f500b791e5c6f80eddc6c6fe9dc91cd16512d295e2ed1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 15 23:41:27 addons-716538 cri-dockerd[1356]: time="2024-04-15T23:41:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bfa021fbb1cbccc525ced179834f647ca2f7f458d2f608c7fc0f28960549939a/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Apr 15 23:41:27 addons-716538 dockerd[1144]: time="2024-04-15T23:41:27.537531762Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=d702548dd72c6baa traceID=a82217cd90393b1bcd9b4b57a5034ab4
	Apr 15 23:41:28 addons-716538 cri-dockerd[1356]: time="2024-04-15T23:41:28Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Apr 15 23:41:28 addons-716538 dockerd[1144]: time="2024-04-15T23:41:28.603573750Z" level=info msg="ignoring event" container=bdbee55ef95e73c117fc52c4107cc7fdb74d7185b90d8ef86ad8cf8dc0f4ce3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
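The dockerd "ignoring event ... TaskDelete" entries above are the daemon's event stream as the containers behind the disabled addons are torn down. For reference, the same stream can be tailed programmatically; a sketch assuming the moby client (github.com/docker/docker, API shape as of roughly v24 — EventsOptions has been reshuffled in newer releases):

// Tails the Docker daemon event stream, the same feed dockerd is logging
// above. Assumes github.com/docker/docker/client (moby), ~v24 API.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	msgs, errs := cli.Events(context.Background(), types.EventsOptions{})
	for {
		select {
		case m := <-msgs:
			// e.g. "container die container=ff11bf5de678..."
			fmt.Printf("%s %s container=%s\n", m.Type, m.Action, m.Actor.ID)
		case err := <-errs:
			panic(err)
		}
	}
}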
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bdbee55ef95e7       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              1 second ago         Exited              helper-pod                0                   bfa021fbb1cbc       helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b
	16767d2d1350b       dd1b12fcb6097                                                                                                                4 seconds ago        Exited              hello-world-app           2                   86526c8224995       hello-world-app-5d77478584-5tbgr
	252c5e4c56486       nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                                                33 seconds ago       Running             nginx                     0                   6cc6eb3b82b0e       nginx
	d97cea745c5c0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 About a minute ago   Running             gcp-auth                  0                   a2dcfa96cbba9       gcp-auth-7d69788767-bptln
	17b720f2ab9f9       1a024e390dd05                                                                                                                About a minute ago   Exited              patch                     1                   727f89e48b1df       ingress-nginx-admission-patch-pldtp
	b46a93971584e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   About a minute ago   Exited              create                    0                   fbfd896ac6ab3       ingress-nginx-admission-create-6zl2b
	a32cdf679ad4c       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        About a minute ago   Running             yakd                      0                   7f66d30e69b91       yakd-dashboard-9947fc6bf-q6l7s
	44a3c2e320f96       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner    0                   cd44c13e389f4       local-path-provisioner-78b46b4d5c-4ws8p
	d2555737ca580       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50               2 minutes ago        Running             cloud-spanner-emulator    0                   fbf90e6c0b262       cloud-spanner-emulator-5446596998-gmw2t
	804e20d36daa4       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   057e8ceefac11       storage-provisioner
	db7b6a66e8063       2437cf7621777                                                                                                                2 minutes ago        Running             coredns                   0                   3a7c32307dc00       coredns-76f75df574-6n4th
	329f78f44f4e9       0e9b4a0d1e86d                                                                                                                2 minutes ago        Running             kube-proxy                0                   8be9ff4bc2e13       kube-proxy-j69k5
	f8a534a87f851       4b51f9f6bc9b9                                                                                                                2 minutes ago        Running             kube-scheduler            0                   538dbe56a37cf       kube-scheduler-addons-716538
	e256642199fb9       121d70d9a3805                                                                                                                2 minutes ago        Running             kube-controller-manager   0                   4e6e8de6f52bd       kube-controller-manager-addons-716538
	e1bba4b464e16       014faa467e297                                                                                                                2 minutes ago        Running             etcd                      0                   3ada14aa17860       etcd-addons-716538
	cdfd7bafd574d       2581114f5709d                                                                                                                2 minutes ago        Running             kube-apiserver            0                   9ecc364db9d26       kube-apiserver-addons-716538
	
	
	==> coredns [db7b6a66e806] <==
	[INFO] 10.244.0.20:57656 - 42051 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055383s
	[INFO] 10.244.0.20:57656 - 5375 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000203697s
	[INFO] 10.244.0.20:44870 - 29090 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065172s
	[INFO] 10.244.0.20:44870 - 64981 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000298792s
	[INFO] 10.244.0.20:57656 - 18702 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000308826s
	[INFO] 10.244.0.20:36886 - 53172 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000310115s
	[INFO] 10.244.0.20:44498 - 19020 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00016766s
	[INFO] 10.244.0.20:57656 - 24819 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003460385s
	[INFO] 10.244.0.20:44870 - 6363 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004343411s
	[INFO] 10.244.0.20:44870 - 28395 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004075716s
	[INFO] 10.244.0.20:44498 - 20273 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000409214s
	[INFO] 10.244.0.20:36886 - 5283 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.001384554s
	[INFO] 10.244.0.20:57656 - 36310 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001338253s
	[INFO] 10.244.0.20:57656 - 25379 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000218465s
	[INFO] 10.244.0.20:44870 - 53589 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007021s
	[INFO] 10.244.0.20:44498 - 64561 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041001s
	[INFO] 10.244.0.20:36886 - 5421 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048853s
	[INFO] 10.244.0.20:44498 - 54541 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125265s
	[INFO] 10.244.0.20:44498 - 24848 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001925319s
	[INFO] 10.244.0.20:36886 - 23938 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000111637s
	[INFO] 10.244.0.20:44498 - 44277 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002645657s
	[INFO] 10.244.0.20:36886 - 18717 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002065295s
	[INFO] 10.244.0.20:44498 - 59078 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000170647s
	[INFO] 10.244.0.20:36886 - 28971 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001597227s
	[INFO] 10.244.0.20:36886 - 1916 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096539s
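The NXDOMAIN/NOERROR pattern above is ordinary resolv.conf search-list expansion, matching the per-pod resolv.conf cri-dockerd writes in the Docker log (search domains plus options ndots:5): hello-world-app.default.svc.cluster.local has only four dots, so the resolver appends each search domain and tries those first (the NXDOMAIN answers) before querying the name as-is (the final NOERROR). A self-contained sketch of that candidate ordering, using the three suffixes visible in the queries above:

// Glibc-style search-list expansion with ndots:5, reproducing the query
// order in the coredns log above. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

func candidates(name string, search []string, ndots int) []string {
	var out []string
	// Fewer than ndots dots and no trailing dot: try search domains first.
	if strings.Count(name, ".") < ndots && !strings.HasSuffix(name, ".") {
		for _, d := range search {
			out = append(out, name+"."+d) // each of these NXDOMAINs in the log
		}
	}
	return append(out, name) // the absolute query that returns NOERROR
}

func main() {
	search := []string{"svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}
	for _, c := range candidates("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(c)
	}
}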
	
	
	==> describe nodes <==
	Name:               addons-716538
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-716538
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=addons-716538
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_38_42_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-716538
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:38:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-716538
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 23:41:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:41:15 +0000   Mon, 15 Apr 2024 23:38:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:41:15 +0000   Mon, 15 Apr 2024 23:38:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:41:15 +0000   Mon, 15 Apr 2024 23:38:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:41:15 +0000   Mon, 15 Apr 2024 23:38:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-716538
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 a97bee6118dc4281b67727ee0d82eddd
	  System UUID:                d9990504-75a5-4322-8e09-77bddd702087
	  Boot ID:                    ed177bf7-a11f-466b-8935-e2b8479e05ab
	  Kernel Version:             5.15.0-1057-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-gmw2t                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  default                     hello-world-app-5d77478584-5tbgr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-7d69788767-bptln                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 coredns-76f75df574-6n4th                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m34s
	  kube-system                 etcd-addons-716538                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m47s
	  kube-system                 kube-apiserver-addons-716538                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-controller-manager-addons-716538                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-proxy-j69k5                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-addons-716538                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  local-path-storage          local-path-provisioner-78b46b4d5c-4ws8p                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-q6l7s                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node addons-716538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node addons-716538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x7 over 2m55s)  kubelet          Node addons-716538 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m47s                  kubelet          Node addons-716538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s                  kubelet          Node addons-716538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s                  kubelet          Node addons-716538 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m47s                  kubelet          Node addons-716538 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m47s                  kubelet          Node addons-716538 status is now: NodeReady
	  Normal  Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m35s                  node-controller  Node addons-716538 event: Registered Node addons-716538 in Controller
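The "Allocated resources" totals above follow directly from the per-pod table (allocatable: 2 CPUs = 2000m, 8022564Ki memory), with the percentages truncated rather than rounded, as the 750m/2000m = 37.5% -> 37% entry shows. A quick arithmetic check:

// Recomputes the "Allocated resources" block from the pod table above.
// Integer division mirrors the truncation (750/2000 = 37.5% -> 37%).
package main

import "fmt"

func main() {
	cpuReq := 100 + 100 + 250 + 200 + 100 // coredns, etcd, apiserver, controller-manager, scheduler (m)
	memReq := 70 + 100 + 128              // coredns, etcd, yakd (Mi)
	memLim := 170 + 256                   // coredns, yakd (Mi)
	allocKi := 8022564                    // node allocatable memory (Ki)
	fmt.Printf("cpu     %dm (%d%%)\n", cpuReq, cpuReq*100/2000)
	fmt.Printf("memory  %dMi (%d%%) requests, %dMi (%d%%) limits\n",
		memReq, memReq*1024*100/allocKi,
		memLim, memLim*1024*100/allocKi)
	// prints: cpu 750m (37%); memory 298Mi (3%) requests, 426Mi (5%) limits
}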
	
	
	==> dmesg <==
	[Apr15 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.022137] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497046] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002637] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015084] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004673] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003593] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.651081] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.501800] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [e1bba4b464e1] <==
	{"level":"info","ts":"2024-04-15T23:38:35.705891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-04-15T23:38:35.705993Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-04-15T23:38:35.726603Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-15T23:38:35.726846Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T23:38:35.72697Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-15T23:38:35.72828Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-15T23:38:35.738591Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-15T23:38:36.279254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-15T23:38:36.279488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-15T23:38:36.279597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-15T23:38:36.279782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-15T23:38:36.279874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-15T23:38:36.279977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-15T23:38:36.280083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-15T23:38:36.285513Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-716538 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-15T23:38:36.285711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T23:38:36.286103Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T23:38:36.286613Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-15T23:38:36.286979Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-15T23:38:36.294139Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-15T23:38:36.293759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-15T23:38:36.299269Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T23:38:36.299522Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-15T23:38:36.301528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-15T23:38:36.343251Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [d97cea745c5c] <==
	2024/04/15 23:40:18 GCP Auth Webhook started!
	2024/04/15 23:40:30 Ready to marshal response ...
	2024/04/15 23:40:30 Ready to write response ...
	2024/04/15 23:40:36 Ready to marshal response ...
	2024/04/15 23:40:36 Ready to write response ...
	2024/04/15 23:40:54 Ready to marshal response ...
	2024/04/15 23:40:54 Ready to write response ...
	2024/04/15 23:40:59 Ready to marshal response ...
	2024/04/15 23:40:59 Ready to write response ...
	2024/04/15 23:41:03 Ready to marshal response ...
	2024/04/15 23:41:03 Ready to write response ...
	2024/04/15 23:41:26 Ready to marshal response ...
	2024/04/15 23:41:26 Ready to write response ...
	2024/04/15 23:41:26 Ready to marshal response ...
	2024/04/15 23:41:26 Ready to write response ...
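The gcp-auth log above is the lifecycle of a mutating admission webhook: one startup line, then a marshal/write pair per admission request as pods are created. A minimal skeleton with that shape, assuming k8s.io/api/admission/v1; the /mutate path, the cert filenames, and the elided JSON patch are illustrative, not minikube's actual gcp-auth code:

// Skeleton of a mutating admission webhook with the log shape seen above.
// A real webhook (like gcp-auth) would attach a JSON patch mounting the
// credentials; that part is deliberately elided here.
package main

import (
	"encoding/json"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

func mutate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
		return
	}
	log.Println("Ready to marshal response ...")
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: true, // the patch adding the credential volume/env would go here
	}
	log.Println("Ready to write response ...")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	log.Println("GCP Auth Webhook started!")
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}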
	
	
	==> kernel <==
	 23:41:29 up 23 min,  0 users,  load average: 2.10, 1.49, 0.66
	Linux addons-716538 5.15.0-1057-aws #63~20.04.1-Ubuntu SMP Mon Mar 25 10:29:14 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [cdfd7bafd574] <==
	W0415 23:39:32.563782       1 handler_proxy.go:93] no RequestInfo found in the context
	E0415 23:39:32.563845       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0415 23:39:32.565194       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.128.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.128.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.128.162:443: connect: connection refused
	E0415 23:39:32.570600       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.128.162:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.128.162:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.128.162:443: connect: connection refused
	I0415 23:39:32.678217       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0415 23:40:48.319002       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0415 23:40:48.524184       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	W0415 23:40:49.361521       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0415 23:40:53.969310       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0415 23:40:54.292915       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.212.104"}
	I0415 23:41:04.063089       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.57.189"}
	I0415 23:41:14.589430       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:41:14.589532       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:41:14.624735       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:41:14.624791       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:41:14.654412       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:41:14.654730       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:41:14.673153       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:41:14.673397       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:41:14.701712       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:41:14.702930       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0415 23:41:15.654875       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0415 23:41:15.702489       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0415 23:41:15.710995       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [e256642199fb] <==
	W0415 23:41:18.708932       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:18.708971       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:41:18.711476       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:18.711511       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:41:19.004031       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:19.004070       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 23:41:20.922116       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0415 23:41:20.931453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="4.644µs"
	I0415 23:41:20.937023       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0415 23:41:22.486496       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:22.486529       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:41:22.772262       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:22.772298       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:41:22.776870       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:22.776910       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 23:41:25.014949       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0415 23:41:25.014993       1 shared_informer.go:318] Caches are synced for resource quota
	I0415 23:41:25.388425       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0415 23:41:25.388468       1 shared_informer.go:318] Caches are synced for garbage collector
	I0415 23:41:26.493449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.426µs"
	I0415 23:41:26.592211       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0415 23:41:26.845641       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0415 23:41:27.500969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.504µs"
	W0415 23:41:28.739224       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:41:28.739259       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [329f78f44f4e] <==
	I0415 23:38:56.462243       1 server_others.go:72] "Using iptables proxy"
	I0415 23:38:56.540607       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0415 23:38:56.572639       1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0415 23:38:56.572667       1 server_others.go:168] "Using iptables Proxier"
	I0415 23:38:56.574450       1 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0415 23:38:56.574465       1 server_others.go:529] "Defaulting to no-op detect-local"
	I0415 23:38:56.574503       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 23:38:56.574700       1 server.go:865] "Version info" version="v1.29.3"
	I0415 23:38:56.574710       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 23:38:56.592621       1 config.go:188] "Starting service config controller"
	I0415 23:38:56.592654       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 23:38:56.592674       1 config.go:97] "Starting endpoint slice config controller"
	I0415 23:38:56.592678       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 23:38:56.593195       1 config.go:315] "Starting node config controller"
	I0415 23:38:56.593210       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 23:38:56.692731       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 23:38:56.692820       1 shared_informer.go:318] Caches are synced for service config
	I0415 23:38:56.694786       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f8a534a87f85] <==
	W0415 23:38:39.040257       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 23:38:39.040279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 23:38:39.040352       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 23:38:39.040361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 23:38:39.040393       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 23:38:39.040408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 23:38:39.040423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 23:38:39.040434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 23:38:39.050535       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:38:39.050898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:38:39.050855       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:38:39.051435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:38:39.910896       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:38:39.910931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:38:40.048827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:38:40.049119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:38:40.077616       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0415 23:38:40.077881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0415 23:38:40.088749       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 23:38:40.089029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 23:38:40.109570       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 23:38:40.109620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 23:38:40.183117       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 23:38:40.183158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0415 23:38:40.598082       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.898788    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3d0f82-4303-4317-b453-108208636846" containerName="node-driver-registrar"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.898855    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54b52e5-5f4c-4c6d-9fdb-b9b1b6e18e80" containerName="minikube-ingress-dns"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.898912    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc607eb-e557-457c-a6ad-7aa35cfe7edb" containerName="controller"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.898986    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ce8cbc-cbb0-4d2b-b2c9-4fa03144580c" containerName="task-pv-container"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.899055    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f5cce1e-4668-493d-a813-f1d2c7ffa655" containerName="csi-attacher"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.899127    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3d0f82-4303-4317-b453-108208636846" containerName="hostpath"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.899184    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf47f930-c16f-4bc9-94a8-8abe11547e86" containerName="nvidia-device-plugin-ctr"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.899315    2205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3d0f82-4303-4317-b453-108208636846" containerName="csi-snapshotter"
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.944834    2205 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/cf47f930-c16f-4bc9-94a8-8abe11547e86-device-plugin\") pod \"cf47f930-c16f-4bc9-94a8-8abe11547e86\" (UID: \"cf47f930-c16f-4bc9-94a8-8abe11547e86\") "
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.944902    2205 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rbq5\" (UniqueName: \"kubernetes.io/projected/cf47f930-c16f-4bc9-94a8-8abe11547e86-kube-api-access-9rbq5\") pod \"cf47f930-c16f-4bc9-94a8-8abe11547e86\" (UID: \"cf47f930-c16f-4bc9-94a8-8abe11547e86\") "
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.945081    2205 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf47f930-c16f-4bc9-94a8-8abe11547e86-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "cf47f930-c16f-4bc9-94a8-8abe11547e86" (UID: "cf47f930-c16f-4bc9-94a8-8abe11547e86"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 15 23:41:26 addons-716538 kubelet[2205]: I0415 23:41:26.949246    2205 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf47f930-c16f-4bc9-94a8-8abe11547e86-kube-api-access-9rbq5" (OuterVolumeSpecName: "kube-api-access-9rbq5") pod "cf47f930-c16f-4bc9-94a8-8abe11547e86" (UID: "cf47f930-c16f-4bc9-94a8-8abe11547e86"). InnerVolumeSpecName "kube-api-access-9rbq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045684    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a11a536d-92ea-4c97-bbb4-864729141827-script\") pod \"helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b\" (UID: \"a11a536d-92ea-4c97-bbb4-864729141827\") " pod="local-path-storage/helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045742    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a11a536d-92ea-4c97-bbb4-864729141827-data\") pod \"helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b\" (UID: \"a11a536d-92ea-4c97-bbb4-864729141827\") " pod="local-path-storage/helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045770    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a11a536d-92ea-4c97-bbb4-864729141827-gcp-creds\") pod \"helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b\" (UID: \"a11a536d-92ea-4c97-bbb4-864729141827\") " pod="local-path-storage/helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045802    2205 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nw76\" (UniqueName: \"kubernetes.io/projected/a11a536d-92ea-4c97-bbb4-864729141827-kube-api-access-7nw76\") pod \"helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b\" (UID: \"a11a536d-92ea-4c97-bbb4-864729141827\") " pod="local-path-storage/helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045830    2205 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/cf47f930-c16f-4bc9-94a8-8abe11547e86-device-plugin\") on node \"addons-716538\" DevicePath \"\""
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.045844    2205 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9rbq5\" (UniqueName: \"kubernetes.io/projected/cf47f930-c16f-4bc9-94a8-8abe11547e86-kube-api-access-9rbq5\") on node \"addons-716538\" DevicePath \"\""
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.489097    2205 scope.go:117] "RemoveContainer" containerID="16767d2d1350b178805cb4c0c4e8741a2a025fb8b280852f39152c1f3928ef00"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: E0415 23:41:27.489376    2205 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-5tbgr_default(2a5cb3e3-f8a7-4dec-9333-50d13294e6d0)\"" pod="default/hello-world-app-5d77478584-5tbgr" podUID="2a5cb3e3-f8a7-4dec-9333-50d13294e6d0"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.515819    2205 scope.go:117] "RemoveContainer" containerID="7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.560585    2205 scope.go:117] "RemoveContainer" containerID="7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: E0415 23:41:27.564749    2205 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131" containerID="7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131"
	Apr 15 23:41:27 addons-716538 kubelet[2205]: I0415 23:41:27.564811    2205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131"} err="failed to get container status \"7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7d3908fab600a83ba6c2e28a5abe3a119d325772527c01f1d11bc3dce9025131"
	Apr 15 23:41:28 addons-716538 kubelet[2205]: I0415 23:41:28.244903    2205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf47f930-c16f-4bc9-94a8-8abe11547e86" path="/var/lib/kubelet/pods/cf47f930-c16f-4bc9-94a8-8abe11547e86/volumes"
	
	
	==> storage-provisioner [804e20d36daa] <==
	I0415 23:39:01.902150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 23:39:01.920078       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 23:39:01.920130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 23:39:01.936773       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 23:39:01.936955       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-716538_255e6620-c98d-4cad-aa2b-1505971ad2f9!
	I0415 23:39:01.937870       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"677b27fa-6447-48e8-ae3a-d64ad4805a2c", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-716538_255e6620-c98d-4cad-aa2b-1505971ad2f9 became leader
	I0415 23:39:02.045603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-716538_255e6620-c98d-4cad-aa2b-1505971ad2f9!
	

-- /stdout --
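The controller-manager events in the dump above show the delayed-binding flow for the test PVC: WaitForFirstConsumer holds the claim Pending until a pod that uses it is scheduled, and ExternalProvisioning then hands the claim to rancher.io/local-path. A minimal sketch of the checks that event message asks for ("verify that the provisioner is running and correctly registered"), assuming the addons-716538 context and the local-path-storage namespace that appears in the kubelet log:

	# Is the provisioner actually running? (namespace taken from the kubelet entries above)
	kubectl --context addons-716538 -n local-path-storage get pods

	# Is a StorageClass registered for rancher.io/local-path, and does it use
	# WaitForFirstConsumer? That binding mode is what keeps test-pvc Pending at first.
	kubectl --context addons-716538 get storageclass -o wide

	# Watch test-pvc flip from Pending to Bound once the consuming pod is scheduled
	kubectl --context addons-716538 get pvc test-pvc -w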
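The kube-proxy section likewise documents its own localhost-NodePort behavior: it sets route_localnet=1 and names the two knobs that control it, iptables.localhostNodePorts and nodePortAddresses. A hedged way to inspect both ends of that, assuming the cluster keeps its kube-proxy configuration in the usual kube-system ConfigMap (standard for kubeadm-provisioned minikube nodes, but an assumption here):

	# The running KubeProxyConfiguration; look for nodePortAddresses and
	# iptables.localhostNodePorts, the config equivalents of the flags named in the log
	kubectl --context addons-716538 -n kube-system get configmap kube-proxy -o yaml

	# Confirm the sysctl the log says it set, from inside the minikube node
	minikube -p addons-716538 ssh "sysctl net.ipv4.conf.all.route_localnet"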
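Separately, the kubelet entries record hello-world-app in a 20s CrashLoopBackOff, plus what looks like a benign "No such container" race when the kubelet re-queries a container it has just deleted. The usual triage for the crash loop, as a sketch reusing the pod name from the log:

	# Output of the previous (crashed) container instance
	kubectl --context addons-716538 logs hello-world-app-5d77478584-5tbgr --previous

	# Restart count, back-off state, and last termination reason
	kubectl --context addons-716538 describe pod hello-world-app-5d77478584-5tbgr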
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-716538 -n addons-716538
helpers_test.go:261: (dbg) Run:  kubectl --context addons-716538 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: test-local-path helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-716538 describe pod test-local-path helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-716538 describe pod test-local-path helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b: exit status 1 (104.782999ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bjh9w (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-bjh9w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-716538 describe pod test-local-path helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b: exit status 1
--- FAIL: TestAddons/parallel/Ingress (36.84s)
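The exit status 1 just above is only the post-mortem race showing through: the helper pod was cleaned up between the field-selector listing and the describe call, so test-local-path printed fine while the second name came back NotFound. One hedged workaround (a sketch, not part of the test suite) is to filter the names through kubectl get, whose --ignore-not-found flag exits 0 and prints nothing for objects that no longer exist:

	# Only describe the pods that still exist; missing names are silently skipped
	kubectl --context addons-716538 get pod test-local-path \
	  helper-pod-create-pvc-c12d040b-6548-4227-bc97-a3872884195b \
	  --ignore-not-found -o name |
	  xargs -r -n1 kubectl --context addons-716538 describe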

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (371.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0416 00:42:50.201499    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:42:57.645341    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.650579    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.660820    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.681068    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.721309    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.801706    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:57.962423    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:58.283478    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:42:58.924112    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:00.204514    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:02.765103    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:03.303573    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:43:03.607458    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:43:07.886133    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:18.126951    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:31.298240    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:43:38.607505    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:43:51.585565    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:43:57.942883    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:57.948326    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:57.958595    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:57.978864    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:58.019185    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:58.099455    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:58.259945    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:58.580609    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:43:58.957315    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:43:59.220757    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:00.514738    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:03.078131    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:08.199122    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:18.439338    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:19.567754    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:44:25.224513    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:44:26.643357    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:44:38.919987    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:44:48.451838    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:45:05.392504    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.397765    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.408019    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.428392    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.468726    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.549017    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:05.709392    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:06.029961    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:06.358021    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:45:06.670620    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:07.951512    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:10.512575    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:15.633059    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:18.834498    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0416 00:45:19.880406    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:45:25.873620    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:45:34.041729    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:45:41.488343    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:45:46.353844    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:46:07.742965    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:46:20.762198    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m8.655319296s)

-- stdout --
	* [old-k8s-version-014065] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-014065" primary control-plane node in "old-k8s-version-014065" cluster
	* Pulling base image v0.0.43-1713215244-18647 ...
	* Restarting existing docker container for "old-k8s-version-014065" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.0.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-014065 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0416 00:42:40.650687  369017 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:42:40.650860  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:42:40.650892  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:42:40.650912  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:42:40.651179  369017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0416 00:42:40.651617  369017 out.go:298] Setting JSON to false
	I0416 00:42:40.652782  369017 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5096,"bootTime":1713223065,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0416 00:42:40.652882  369017 start.go:139] virtualization:  
	I0416 00:42:40.657149  369017 out.go:177] * [old-k8s-version-014065] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0416 00:42:40.659310  369017 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:42:40.659393  369017 notify.go:220] Checking for updates...
	I0416 00:42:40.664118  369017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:42:40.666095  369017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:42:40.668274  369017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0416 00:42:40.670450  369017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0416 00:42:40.672695  369017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:42:40.675191  369017 config.go:182] Loaded profile config "old-k8s-version-014065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0416 00:42:40.678294  369017 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0416 00:42:40.680735  369017 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:42:40.704616  369017 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0416 00:42:40.704740  369017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0416 00:42:40.784256  369017 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-16 00:42:40.771569696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0416 00:42:40.784375  369017 docker.go:295] overlay module found
	I0416 00:42:40.787819  369017 out.go:177] * Using the docker driver based on existing profile
	I0416 00:42:40.789398  369017 start.go:297] selected driver: docker
	I0416 00:42:40.789418  369017 start.go:901] validating driver "docker" against &{Name:old-k8s-version-014065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:42:40.789550  369017 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:42:40.790224  369017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0416 00:42:40.858742  369017 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-16 00:42:40.838986442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0416 00:42:40.859116  369017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:42:40.859187  369017 cni.go:84] Creating CNI manager for ""
	I0416 00:42:40.859307  369017 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0416 00:42:40.859362  369017 start.go:340] cluster config:
	{Name:old-k8s-version-014065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:42:40.862959  369017 out.go:177] * Starting "old-k8s-version-014065" primary control-plane node in "old-k8s-version-014065" cluster
	I0416 00:42:40.864551  369017 cache.go:121] Beginning downloading kic base image for docker with docker
	I0416 00:42:40.866528  369017 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0416 00:42:40.868467  369017 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0416 00:42:40.868519  369017 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0416 00:42:40.868539  369017 cache.go:56] Caching tarball of preloaded images
	I0416 00:42:40.868571  369017 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0416 00:42:40.868617  369017 preload.go:173] Found /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0416 00:42:40.868628  369017 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0416 00:42:40.868740  369017 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/config.json ...
	I0416 00:42:40.883252  369017 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0416 00:42:40.883280  369017 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0416 00:42:40.883301  369017 cache.go:194] Successfully downloaded all kic artifacts
	I0416 00:42:40.883337  369017 start.go:360] acquireMachinesLock for old-k8s-version-014065: {Name:mk76975395468af310769b039e692fcb88b5961e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:42:40.883407  369017 start.go:364] duration metric: took 43.027µs to acquireMachinesLock for "old-k8s-version-014065"
	I0416 00:42:40.883433  369017 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:42:40.883442  369017 fix.go:54] fixHost starting: 
	I0416 00:42:40.883714  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:40.900618  369017 fix.go:112] recreateIfNeeded on old-k8s-version-014065: state=Stopped err=<nil>
	W0416 00:42:40.900660  369017 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:42:40.903302  369017 out.go:177] * Restarting existing docker container for "old-k8s-version-014065" ...
	I0416 00:42:40.905375  369017 cli_runner.go:164] Run: docker start old-k8s-version-014065
	I0416 00:42:41.230339  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:41.254892  369017 kic.go:430] container "old-k8s-version-014065" state is running.
	I0416 00:42:41.255408  369017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-014065
	I0416 00:42:41.277517  369017 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/config.json ...
	I0416 00:42:41.277898  369017 machine.go:94] provisionDockerMachine start ...
	I0416 00:42:41.278073  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:41.297808  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:41.298129  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:41.298140  369017 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:42:41.298884  369017 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0416 00:42:44.446753  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014065
	
	I0416 00:42:44.446777  369017 ubuntu.go:169] provisioning hostname "old-k8s-version-014065"
	I0416 00:42:44.446839  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:44.462291  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:44.462542  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:44.462561  369017 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-014065 && echo "old-k8s-version-014065" | sudo tee /etc/hostname
	I0416 00:42:44.619722  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-014065
	
	I0416 00:42:44.619809  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:44.637431  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:44.637691  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:44.637712  369017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-014065' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-014065/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-014065' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:42:44.783431  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:42:44.783523  369017 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18647-2210/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-2210/.minikube}
	I0416 00:42:44.783572  369017 ubuntu.go:177] setting up certificates
	I0416 00:42:44.783603  369017 provision.go:84] configureAuth start
	I0416 00:42:44.783695  369017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-014065
	I0416 00:42:44.799622  369017 provision.go:143] copyHostCerts
	I0416 00:42:44.799700  369017 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem, removing ...
	I0416 00:42:44.799715  369017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem
	I0416 00:42:44.799795  369017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem (1078 bytes)
	I0416 00:42:44.799954  369017 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem, removing ...
	I0416 00:42:44.799967  369017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem
	I0416 00:42:44.799997  369017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem (1123 bytes)
	I0416 00:42:44.800058  369017 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem, removing ...
	I0416 00:42:44.800067  369017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem
	I0416 00:42:44.800093  369017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem (1679 bytes)
	I0416 00:42:44.800143  369017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-014065 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-014065]
	I0416 00:42:45.012209  369017 provision.go:177] copyRemoteCerts
	I0416 00:42:45.012301  369017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:42:45.012349  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:45.062155  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:45.200098  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 00:42:45.258251  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:42:45.340053  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:42:45.389453  369017 provision.go:87] duration metric: took 605.820474ms to configureAuth
	I0416 00:42:45.389480  369017 ubuntu.go:193] setting minikube options for container-runtime
	I0416 00:42:45.389708  369017 config.go:182] Loaded profile config "old-k8s-version-014065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0416 00:42:45.389862  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:45.419055  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:45.419344  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:45.419355  369017 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 00:42:45.576308  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0416 00:42:45.576331  369017 ubuntu.go:71] root file system type: overlay
	I0416 00:42:45.576452  369017 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 00:42:45.576526  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:45.593791  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:45.594064  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:45.594148  369017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 00:42:45.755944  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 00:42:45.756059  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:45.774814  369017 main.go:141] libmachine: Using SSH client type: native
	I0416 00:42:45.775072  369017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0416 00:42:45.775096  369017 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 00:42:45.929028  369017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:42:45.929054  369017 machine.go:97] duration metric: took 4.651142154s to provisionDockerMachine
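Note the idiom in the SSH command at 00:42:45.775: the freshly rendered docker.service.new replaces the installed unit, and the daemon-reload/enable/restart sequence runs, only when `diff -u` reports a difference, so an unchanged configuration never restarts dockerd. A minimal Go sketch of the same update-only-if-changed pattern; paths and the service name are taken from the log, and error handling is abbreviated:

// Sketch: swap in the new unit and bounce the service only when the
// rendered file actually differs from what is installed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func replaceIfChanged(current, next, service string) error {
	old, _ := os.ReadFile(current) // a missing file reads as empty => treated as changed
	fresh, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(old, fresh) {
		return os.Remove(next) // nothing to do; drop the staged copy
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", service}, {"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	fmt.Println("result:", err)
}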
	I0416 00:42:45.929067  369017 start.go:293] postStartSetup for "old-k8s-version-014065" (driver="docker")
	I0416 00:42:45.929078  369017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:42:45.929159  369017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:42:45.929201  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:45.949654  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:46.053819  369017 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:42:46.057318  369017 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0416 00:42:46.057362  369017 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0416 00:42:46.057376  369017 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0416 00:42:46.057382  369017 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0416 00:42:46.057392  369017 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/addons for local assets ...
	I0416 00:42:46.057454  369017 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/files for local assets ...
	I0416 00:42:46.057528  369017 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem -> 75632.pem in /etc/ssl/certs
	I0416 00:42:46.057647  369017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:42:46.067853  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem --> /etc/ssl/certs/75632.pem (1708 bytes)
	I0416 00:42:46.095193  369017 start.go:296] duration metric: took 166.111499ms for postStartSetup
	I0416 00:42:46.095362  369017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:42:46.095428  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:46.110487  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:46.212470  369017 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0416 00:42:46.217233  369017 fix.go:56] duration metric: took 5.333784607s for fixHost
	I0416 00:42:46.217256  369017 start.go:83] releasing machines lock for "old-k8s-version-014065", held for 5.333835855s
	I0416 00:42:46.217339  369017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-014065
	I0416 00:42:46.235046  369017 ssh_runner.go:195] Run: cat /version.json
	I0416 00:42:46.235139  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:46.235423  369017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:42:46.235471  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:46.252418  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:46.260658  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:46.350842  369017 ssh_runner.go:195] Run: systemctl --version
	I0416 00:42:46.478569  369017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 00:42:46.483752  369017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0416 00:42:46.505023  369017 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0416 00:42:46.505119  369017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0416 00:42:46.523572  369017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0416 00:42:46.541763  369017 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
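The find+sed pipeline above normalises the CNI configs in place: it injects a "name" field into the loopback conf where one is missing, pins "cniVersion" to 1.0.0, and rewrites bridge/podman subnets to 10.244.0.0/16. The same loopback fix-up could be done by round-tripping the JSON, as in this stdlib-only Go sketch; note that unlike sed, json.Marshal does not preserve key order, so this is an illustration rather than minikube's approach:

// Sketch: ensure a loopback CNI conf has a "name" and pin cniVersion.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func patchLoopback(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Hypothetical path; the log matches /etc/cni/net.d/*loopback.conf*.
	fmt.Println(patchLoopback("/etc/cni/net.d/200-loopback.conf"))
}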
	I0416 00:42:46.541793  369017 start.go:494] detecting cgroup driver to use...
	I0416 00:42:46.541827  369017 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0416 00:42:46.541929  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:42:46.559907  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0416 00:42:46.570407  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 00:42:46.581203  369017 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 00:42:46.581266  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 00:42:46.597430  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 00:42:46.609876  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 00:42:46.623569  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 00:42:46.639861  369017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:42:46.650339  369017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 00:42:46.660506  369017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:42:46.669239  369017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:42:46.687918  369017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:42:46.773278  369017 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0416 00:42:46.878574  369017 start.go:494] detecting cgroup driver to use...
	I0416 00:42:46.878672  369017 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0416 00:42:46.878756  369017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 00:42:46.892579  369017 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0416 00:42:46.892712  369017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 00:42:46.906960  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:42:46.946120  369017 ssh_runner.go:195] Run: which cri-dockerd
	I0416 00:42:46.950034  369017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 00:42:46.960271  369017 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 00:42:46.982839  369017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 00:42:47.085110  369017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 00:42:47.196394  369017 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 00:42:47.196595  369017 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 00:42:47.218519  369017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:42:47.333055  369017 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 00:42:47.743980  369017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 00:42:47.766270  369017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 00:42:47.791635  369017 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.0.1 ...
	I0416 00:42:47.791755  369017 cli_runner.go:164] Run: docker network inspect old-k8s-version-014065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0416 00:42:47.814873  369017 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0416 00:42:47.818679  369017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
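The /etc/hosts edit above is made idempotent by filtering out any existing `host.minikube.internal` line before appending the current mapping and copying the temp file back into place, so reruns never accumulate duplicates. The same pattern in a stdlib-only Go sketch; the path is parameterised so it can be tried against a scratch file rather than the real /etc/hosts:

// Sketch: idempotently pin "<ip>\t<host>" in a hosts-style file.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Mirror `grep -v $'\t<host>$'`: drop any stale mapping for this host.
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// "hosts.sample" is a stand-in path for experimentation.
	_ = ensureHostsEntry("hosts.sample", "192.168.76.1", "host.minikube.internal")
}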
	I0416 00:42:47.829749  369017 kubeadm.go:877] updating cluster {Name:old-k8s-version-014065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:42:47.829883  369017 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0416 00:42:47.829942  369017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 00:42:47.846608  369017 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/pause:3.2
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0416 00:42:47.846642  369017 docker.go:615] Images already preloaded, skipping extraction
	I0416 00:42:47.846711  369017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 00:42:47.864960  369017 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0416 00:42:47.864983  369017 cache_images.go:84] Images are preloaded, skipping loading
	I0416 00:42:47.864994  369017 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0416 00:42:47.865112  369017 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-014065 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:42:47.865188  369017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 00:42:47.915114  369017 cni.go:84] Creating CNI manager for ""
	I0416 00:42:47.915146  369017 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0416 00:42:47.915155  369017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:42:47.915284  369017 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-014065 NodeName:old-k8s-version-014065 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 00:42:47.915465  369017 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-014065"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
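The generated kubeadm.yaml above is four YAML documents in a single file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch for inspecting which kinds such a generated file contains; it assumes a local copy named kubeadm.yaml (e.g. fetched from /var/tmp/minikube/kubeadm.yaml on the node):

// Sketch: split a multi-document kubeadm config on "---" separators
// and report the "kind:" of each document, without a YAML library.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i+1,
					strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}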
	
	I0416 00:42:47.915539  369017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 00:42:47.924590  369017 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:42:47.924664  369017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:42:47.933468  369017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0416 00:42:47.956820  369017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:42:47.976296  369017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0416 00:42:47.996472  369017 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0416 00:42:48.000377  369017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:42:48.019281  369017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:42:48.118866  369017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:42:48.134694  369017 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065 for IP: 192.168.76.2
	I0416 00:42:48.134719  369017 certs.go:194] generating shared ca certs ...
	I0416 00:42:48.134736  369017 certs.go:226] acquiring lock for ca certs: {Name:mk0f2c276f9ccc821c50906b5561fa26a27a6ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:42:48.134898  369017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key
	I0416 00:42:48.134947  369017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key
	I0416 00:42:48.134959  369017 certs.go:256] generating profile certs ...
	I0416 00:42:48.135056  369017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.key
	I0416 00:42:48.135128  369017 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/apiserver.key.04662e63
	I0416 00:42:48.135179  369017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/proxy-client.key
	I0416 00:42:48.135393  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563.pem (1338 bytes)
	W0416 00:42:48.135436  369017 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563_empty.pem, impossibly tiny 0 bytes
	I0416 00:42:48.135449  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:42:48.135475  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem (1078 bytes)
	I0416 00:42:48.135503  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:42:48.135537  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem (1679 bytes)
	I0416 00:42:48.135589  369017 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem (1708 bytes)
	I0416 00:42:48.136255  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:42:48.175106  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:42:48.203877  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:42:48.230738  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 00:42:48.256640  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 00:42:48.282240  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:42:48.334891  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:42:48.380151  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:42:48.425926  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:42:48.459841  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563.pem --> /usr/share/ca-certificates/7563.pem (1338 bytes)
	I0416 00:42:48.494467  369017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem --> /usr/share/ca-certificates/75632.pem (1708 bytes)
	I0416 00:42:48.533025  369017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:42:48.554900  369017 ssh_runner.go:195] Run: openssl version
	I0416 00:42:48.561486  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75632.pem && ln -fs /usr/share/ca-certificates/75632.pem /etc/ssl/certs/75632.pem"
	I0416 00:42:48.580503  369017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75632.pem
	I0416 00:42:48.585164  369017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:43 /usr/share/ca-certificates/75632.pem
	I0416 00:42:48.585281  369017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75632.pem
	I0416 00:42:48.593724  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75632.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:42:48.606639  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:42:48.616860  369017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:42:48.620911  369017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:42:48.621026  369017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:42:48.629276  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:42:48.638872  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7563.pem && ln -fs /usr/share/ca-certificates/7563.pem /etc/ssl/certs/7563.pem"
	I0416 00:42:48.649249  369017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7563.pem
	I0416 00:42:48.652739  369017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:43 /usr/share/ca-certificates/7563.pem
	I0416 00:42:48.652854  369017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7563.pem
	I0416 00:42:48.659891  369017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7563.pem /etc/ssl/certs/51391683.0"
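The symlink steps above follow OpenSSL's lookup convention: trusted CAs in /etc/ssl/certs are located by subject-hash filenames such as b5213941.0, so each installed PEM gets a companion "<hash>.0" link. Since the hash algorithm is OpenSSL-specific, the sketch below simply shells out to the same `openssl x509 -hash -noout` invocation the log uses and then creates the link; it is illustrative, not minikube's code:

// Sketch: create the OpenSSL subject-hash symlink for a CA PEM,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}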
	I0416 00:42:48.672364  369017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:42:48.676371  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:42:48.684325  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:42:48.691452  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:42:48.698366  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:42:48.705601  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:42:48.713057  369017 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
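The run of `openssl x509 -checkend 86400` probes above asks whether each control-plane certificate will still be valid 24 hours from now. The equivalent check in pure Go with crypto/x509, shown as a sketch with one of the logged cert paths as an example argument:

// Sketch: report whether a PEM-encoded cert expires within the window,
// i.e. the pure-Go analogue of `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}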
	I0416 00:42:48.720399  369017 kubeadm.go:391] StartCluster: {Name:old-k8s-version-014065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-014065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:42:48.720577  369017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 00:42:48.736357  369017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:42:48.745966  369017 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:42:48.746065  369017 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:42:48.746086  369017 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:42:48.746155  369017 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:42:48.755569  369017 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:42:48.756197  369017 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-014065" does not appear in /home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:42:48.756459  369017 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-2210/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-014065" cluster setting kubeconfig missing "old-k8s-version-014065" context setting]
	I0416 00:42:48.756939  369017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/kubeconfig: {Name:mk2a4b2f2d98970b43b7e481fd26cc76bda92838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:42:48.758275  369017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:42:48.768640  369017 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0416 00:42:48.768740  369017 kubeadm.go:591] duration metric: took 22.631028ms to restartPrimaryControlPlane
	I0416 00:42:48.768758  369017 kubeadm.go:393] duration metric: took 48.375226ms to StartCluster
	I0416 00:42:48.768775  369017 settings.go:142] acquiring lock: {Name:mkad41a04993d6fe82f2e16230c6052d1c68b809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:42:48.768875  369017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:42:48.769900  369017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/kubeconfig: {Name:mk2a4b2f2d98970b43b7e481fd26cc76bda92838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:42:48.770202  369017 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 00:42:48.773565  369017 out.go:177] * Verifying Kubernetes components...
	I0416 00:42:48.770442  369017 config.go:182] Loaded profile config "old-k8s-version-014065": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0416 00:42:48.770456  369017 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 00:42:48.775486  369017 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-014065"
	I0416 00:42:48.775506  369017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:42:48.775517  369017 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-014065"
	W0416 00:42:48.775524  369017 addons.go:243] addon storage-provisioner should already be in state true
	I0416 00:42:48.775564  369017 host.go:66] Checking if "old-k8s-version-014065" exists ...
	I0416 00:42:48.775599  369017 addons.go:69] Setting dashboard=true in profile "old-k8s-version-014065"
	I0416 00:42:48.775624  369017 addons.go:234] Setting addon dashboard=true in "old-k8s-version-014065"
	W0416 00:42:48.775630  369017 addons.go:243] addon dashboard should already be in state true
	I0416 00:42:48.775663  369017 host.go:66] Checking if "old-k8s-version-014065" exists ...
	I0416 00:42:48.776009  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:48.776084  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:48.777381  369017 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-014065"
	I0416 00:42:48.777431  369017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-014065"
	I0416 00:42:48.777754  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:48.778044  369017 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-014065"
	I0416 00:42:48.778077  369017 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-014065"
	W0416 00:42:48.778085  369017 addons.go:243] addon metrics-server should already be in state true
	I0416 00:42:48.778113  369017 host.go:66] Checking if "old-k8s-version-014065" exists ...
	I0416 00:42:48.778490  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:48.833729  369017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:42:48.836206  369017 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0416 00:42:48.837388  369017 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:42:48.841447  369017 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 00:42:48.839371  369017 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0416 00:42:48.839393  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 00:42:48.840475  369017 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-014065"
	W0416 00:42:48.849091  369017 addons.go:243] addon default-storageclass should already be in state true
	I0416 00:42:48.849127  369017 host.go:66] Checking if "old-k8s-version-014065" exists ...
	I0416 00:42:48.849146  369017 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 00:42:48.849165  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 00:42:48.849230  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:48.849565  369017 cli_runner.go:164] Run: docker container inspect old-k8s-version-014065 --format={{.State.Status}}
	I0416 00:42:48.852331  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0416 00:42:48.852359  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0416 00:42:48.850483  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:48.852426  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:48.893378  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:48.903404  369017 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 00:42:48.903423  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 00:42:48.903510  369017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-014065
	I0416 00:42:48.907517  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:48.912527  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:48.940743  369017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/old-k8s-version-014065/id_rsa Username:docker}
	I0416 00:42:48.967087  369017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:42:48.994875  369017 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-014065" to be "Ready" ...
	I0416 00:42:49.037169  369017 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 00:42:49.037191  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 00:42:49.056713  369017 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 00:42:49.056784  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 00:42:49.090608  369017 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:42:49.090680  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 00:42:49.099051  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:42:49.112879  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:42:49.114303  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0416 00:42:49.114363  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0416 00:42:49.158880  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0416 00:42:49.158952  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0416 00:42:49.166292  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:42:49.223658  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0416 00:42:49.223730  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0416 00:42:49.292760  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0416 00:42:49.292836  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0416 00:42:49.312936  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.313015  369017 retry.go:31] will retry after 276.652953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0416 00:42:49.312884  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.313115  369017 retry.go:31] will retry after 253.608697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.323674  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0416 00:42:49.323748  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0416 00:42:49.349695  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0416 00:42:49.349770  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0416 00:42:49.363130  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.363223  369017 retry.go:31] will retry after 220.114105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.371991  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0416 00:42:49.372015  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0416 00:42:49.391025  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0416 00:42:49.391048  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0416 00:42:49.409859  369017 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0416 00:42:49.409884  369017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0416 00:42:49.430528  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0416 00:42:49.508649  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.508693  369017 retry.go:31] will retry after 319.576621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.567865  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:42:49.584337  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:42:49.589864  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0416 00:42:49.717337  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.717389  369017 retry.go:31] will retry after 342.735867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0416 00:42:49.737636  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.737755  369017 retry.go:31] will retry after 480.62207ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0416 00:42:49.737761  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.737828  369017 retry.go:31] will retry after 206.909322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.828801  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0416 00:42:49.900438  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.900493  369017 retry.go:31] will retry after 504.091881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:49.945634  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0416 00:42:50.033975  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.034012  369017 retry.go:31] will retry after 428.956014ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.061210  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0416 00:42:50.137117  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.137148  369017 retry.go:31] will retry after 403.493122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.219467  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0416 00:42:50.291358  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.291395  369017 retry.go:31] will retry after 561.688859ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.405721  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0416 00:42:50.464159  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0416 00:42:50.496685  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.496715  369017 retry.go:31] will retry after 830.217116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0416 00:42:50.540203  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.540301  369017 retry.go:31] will retry after 753.850811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.541337  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0416 00:42:50.620859  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.620927  369017 retry.go:31] will retry after 1.219239665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.854313  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0416 00:42:50.935888  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.935920  369017 retry.go:31] will retry after 491.996337ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:50.995614  369017 node_ready.go:53] error getting node "old-k8s-version-014065": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-014065": dial tcp 192.168.76.2:8443: connect: connection refused
	I0416 00:42:51.295050  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:42:51.327532  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0416 00:42:51.394513  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.394598  369017 retry.go:31] will retry after 1.268284118s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0416 00:42:51.427769  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.427806  369017 retry.go:31] will retry after 849.929226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.428856  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0416 00:42:51.503508  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.503546  369017 retry.go:31] will retry after 1.404901225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.841160  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0416 00:42:51.991915  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:51.991995  369017 retry.go:31] will retry after 1.160838179s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:52.277919  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0416 00:42:52.381657  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:52.381696  369017 retry.go:31] will retry after 1.656338858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:52.663522  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0416 00:42:52.846688  369017 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:52.846721  369017 retry.go:31] will retry after 2.009302959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0416 00:42:52.908988  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:42:52.995952  369017 node_ready.go:53] error getting node "old-k8s-version-014065": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-014065": dial tcp 192.168.76.2:8443: connect: connection refused
	I0416 00:42:53.153349  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:42:54.039038  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0416 00:42:54.856372  369017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
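Note: the burst of "apply failed, will retry" entries above is minikube's retry helper (retry.go) re-running kubectl apply with short randomized backoff while the restarted apiserver is still refusing connections on localhost:8443. Below is a minimal Go sketch of that pattern; runApply is a hypothetical stand-in for the ssh_runner kubectl invocation, not minikube's actual code.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyWithRetry re-runs runApply until it succeeds or attempts are
// exhausted, sleeping a short randomized interval between tries, much
// like the 319ms / 342ms / 480ms / ... delays in the log above.
func applyWithRetry(runApply func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = runApply(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(1800)) * time.Millisecond
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// Toy runApply that fails twice (apiserver still down), then succeeds.
	err := applyWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 5)
	fmt.Println("final:", err)
}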
	I0416 00:43:00.919485  369017 node_ready.go:49] node "old-k8s-version-014065" has status "Ready":"True"
	I0416 00:43:00.919508  369017 node_ready.go:38] duration metric: took 11.924597466s for node "old-k8s-version-014065" to be "Ready" ...
	I0416 00:43:00.919518  369017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0416 00:43:01.167938  369017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-ftt5t" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:01.412694  369017 pod_ready.go:92] pod "coredns-74ff55c5b-ftt5t" in "kube-system" namespace has status "Ready":"True"
	I0416 00:43:01.412771  369017 pod_ready.go:81] duration metric: took 232.943583ms for pod "coredns-74ff55c5b-ftt5t" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:01.412797  369017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:01.513631  369017 pod_ready.go:92] pod "etcd-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"True"
	I0416 00:43:01.513710  369017 pod_ready.go:81] duration metric: took 100.892073ms for pod "etcd-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:01.513738  369017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:02.655486  369017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.746450101s)
	I0416 00:43:02.655584  369017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.502201615s)
	I0416 00:43:02.655824  369017 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-014065"
	I0416 00:43:03.292436  369017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.253350544s)
	I0416 00:43:03.295982  369017 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-014065 addons enable metrics-server
	
	I0416 00:43:03.292746  369017 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.436349034s)
	I0416 00:43:03.300599  369017 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I0416 00:43:03.302376  369017 addons.go:505] duration metric: took 14.531905703s to enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I0416 00:43:03.521246  369017 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:06.024246  369017 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"True"
	I0416 00:43:06.024279  369017 pod_ready.go:81] duration metric: took 4.510497756s for pod "kube-apiserver-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:06.024293  369017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:43:08.030195  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:10.033316  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:12.530165  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:14.531052  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:17.031906  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:19.611876  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:22.031623  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:24.032525  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:26.530649  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:28.530928  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:30.531241  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:32.531599  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:35.031841  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:37.032236  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:39.530653  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:41.531003  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:44.031100  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:46.532344  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:49.031085  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:51.031656  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:53.031817  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:55.035014  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:43:57.531030  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:00.142213  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:02.532405  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:05.032266  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:07.530814  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:10.038046  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:12.530831  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:14.530998  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:16.531449  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:18.532278  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:21.030934  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:23.032547  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:25.531445  369017 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:27.031107  369017 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"True"
	I0416 00:44:27.031136  369017 pod_ready.go:81] duration metric: took 1m21.006834629s for pod "kube-controller-manager-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:27.031148  369017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2ltgk" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:27.037549  369017 pod_ready.go:92] pod "kube-proxy-2ltgk" in "kube-system" namespace has status "Ready":"True"
	I0416 00:44:27.037633  369017 pod_ready.go:81] duration metric: took 6.456151ms for pod "kube-proxy-2ltgk" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:27.037649  369017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:27.043793  369017 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-014065" in "kube-system" namespace has status "Ready":"True"
	I0416 00:44:27.043816  369017 pod_ready.go:81] duration metric: took 6.159329ms for pod "kube-scheduler-old-k8s-version-014065" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:27.043829  369017 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace to be "Ready" ...
	I0416 00:44:29.050672  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:31.549733  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:33.550168  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:36.051016  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:38.549215  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:40.551552  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:43.052505  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:45.063320  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:47.551081  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:50.050986  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:52.549641  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:54.550043  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:57.050475  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:44:59.051409  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:01.051769  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:03.550075  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:06.051560  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:08.549431  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:10.550492  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:13.050825  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:15.060133  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:17.550402  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:20.053548  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:22.550057  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:24.550163  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:27.051853  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:29.550263  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:32.050455  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:34.051181  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:36.054970  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:38.055630  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:40.550076  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:42.550172  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:45.056828  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:47.549401  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:50.052078  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:52.088288  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:54.550454  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:57.050133  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:45:59.052754  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:01.550182  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:04.050585  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:06.056869  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:08.549732  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:10.549942  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:12.550763  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:14.552330  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:17.051662  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:19.550206  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:22.049924  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:24.051476  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:26.550936  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:29.050835  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:31.549907  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:33.550519  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:36.050609  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:38.057436  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:40.550723  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:42.596930  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:45.067901  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:47.550192  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:49.550754  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:52.052757  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:54.550808  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:57.051042  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:46:59.052507  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:01.552591  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:04.050376  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:06.551219  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:09.056905  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:11.057732  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:13.550199  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:16.050496  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:18.051509  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:20.549970  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:22.550237  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:24.551772  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:27.051148  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:29.550234  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:32.051368  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:34.549963  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:36.551305  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:39.050909  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:41.060958  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:43.550338  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:45.551018  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:47.553300  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:50.050920  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:52.052167  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:54.550341  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:56.550842  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:58.553866  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:00.566529  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:03.055439  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:05.549612  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:08.050814  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:10.051128  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:12.550061  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:15.053238  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:17.553163  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:20.050973  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:22.053952  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:24.549677  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:26.550323  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:27.050917  369017 pod_ready.go:81] duration metric: took 4m0.007077036s for pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace to be "Ready" ...
	E0416 00:48:27.050943  369017 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0416 00:48:27.050953  369017 pod_ready.go:38] duration metric: took 5m26.131424924s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
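Note: the pod_ready.go loop above polls each pod's PodReady condition until it reports True or the deadline expires; metrics-server-9975d5f86-8k8tv never became Ready, so WaitExtra ended with "context deadline exceeded" after 4m0s. A hedged client-go sketch of that kind of wait follows; the names and intervals are placeholders, not minikube's implementation, and it assumes an apimachinery version that provides wait.PollUntilContextTimeout.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 2s until its PodReady condition is
// True or the timeout elapses (returning a context-deadline error).
func waitPodReady(cs *kubernetes.Clientset, namespace, podName string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient apiserver errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = waitPodReady(cs, "kube-system", "metrics-server-9975d5f86-8k8tv", 6*time.Minute)
}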
	I0416 00:48:27.050972  369017 api_server.go:52] waiting for apiserver process to appear ...
	I0416 00:48:27.051054  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0416 00:48:27.073537  369017 logs.go:276] 2 containers: [a7d7845d2402 b8ea3fa2ab02]
	I0416 00:48:27.073627  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0416 00:48:27.092597  369017 logs.go:276] 2 containers: [33107d331e0b fd5230a8d74b]
	I0416 00:48:27.092711  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0416 00:48:27.123060  369017 logs.go:276] 2 containers: [65e7340af5ef 697870ff99a4]
	I0416 00:48:27.123158  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0416 00:48:27.141834  369017 logs.go:276] 2 containers: [2d7d1b9e8353 7b437d823755]
	I0416 00:48:27.141920  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0416 00:48:27.160491  369017 logs.go:276] 2 containers: [fa54eb276fa9 bf3ceb2acadb]
	I0416 00:48:27.160587  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0416 00:48:27.180907  369017 logs.go:276] 2 containers: [b3c3c455ea1c 4cc3ed1cf27e]
	I0416 00:48:27.180992  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0416 00:48:27.198507  369017 logs.go:276] 0 containers: []
	W0416 00:48:27.198530  369017 logs.go:278] No container was found matching "kindnet"
	I0416 00:48:27.198587  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0416 00:48:27.216044  369017 logs.go:276] 1 container: [c311fb93e11b]
	I0416 00:48:27.216241  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0416 00:48:27.232689  369017 logs.go:276] 2 containers: [c5eb3f5fa95a d8e8f85e95c4]
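Note: most components are listed twice above because docker ps -a includes exited containers, so both the pre-restart and post-restart instances are found before their logs are tailed. A small sketch of the same discovery step using os/exec (minikube actually runs the command over ssh_runner inside the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the k8s_<component> prefix, mirroring the filter used above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}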
	I0416 00:48:27.232722  369017 logs.go:123] Gathering logs for kubernetes-dashboard [c311fb93e11b] ...
	I0416 00:48:27.232735  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c311fb93e11b"
	I0416 00:48:27.254742  369017 logs.go:123] Gathering logs for kube-apiserver [a7d7845d2402] ...
	I0416 00:48:27.254816  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d7845d2402"
	I0416 00:48:27.299302  369017 logs.go:123] Gathering logs for coredns [65e7340af5ef] ...
	I0416 00:48:27.299340  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65e7340af5ef"
	I0416 00:48:27.327427  369017 logs.go:123] Gathering logs for kube-scheduler [7b437d823755] ...
	I0416 00:48:27.327461  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b437d823755"
	I0416 00:48:27.354603  369017 logs.go:123] Gathering logs for kube-proxy [bf3ceb2acadb] ...
	I0416 00:48:27.354636  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3ceb2acadb"
	I0416 00:48:27.377974  369017 logs.go:123] Gathering logs for storage-provisioner [c5eb3f5fa95a] ...
	I0416 00:48:27.378013  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5eb3f5fa95a"
	I0416 00:48:27.400538  369017 logs.go:123] Gathering logs for storage-provisioner [d8e8f85e95c4] ...
	I0416 00:48:27.400566  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8e8f85e95c4"
	I0416 00:48:27.421475  369017 logs.go:123] Gathering logs for container status ...
	I0416 00:48:27.421505  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 00:48:27.501109  369017 logs.go:123] Gathering logs for kubelet ...
	I0416 00:48:27.501142  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0416 00:48:27.578790  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:03 old-k8s-version-014065 kubelet[1223]: E0416 00:43:03.441058    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.580653  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:04 old-k8s-version-014065 kubelet[1223]: E0416 00:43:04.639437    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.583270  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:15 old-k8s-version-014065 kubelet[1223]: E0416 00:43:15.964300    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.592833  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:20 old-k8s-version-014065 kubelet[1223]: E0416 00:43:20.969599    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.593059  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:21 old-k8s-version-014065 kubelet[1223]: E0416 00:43:21.832458    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.594338  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:26 old-k8s-version-014065 kubelet[1223]: E0416 00:43:26.931292    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.597710  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.428287    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.598545  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.976063    1223 pod_workers.go:191] Error syncing pod cef2692c-ceee-4c9c-892a-75dcaae5ab8a ("storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"
	W0416 00:48:27.600747  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:38 old-k8s-version-014065 kubelet[1223]: E0416 00:43:38.955013    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.601295  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:46 old-k8s-version-014065 kubelet[1223]: E0416 00:43:46.931311    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.601614  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:53 old-k8s-version-014065 kubelet[1223]: E0416 00:43:53.963232    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.604001  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:02 old-k8s-version-014065 kubelet[1223]: E0416 00:44:02.430723    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.604206  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:07 old-k8s-version-014065 kubelet[1223]: E0416 00:44:07.936331    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.604402  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:14 old-k8s-version-014065 kubelet[1223]: E0416 00:44:14.931692    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.606653  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:20 old-k8s-version-014065 kubelet[1223]: E0416 00:44:20.968241    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.606863  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:25 old-k8s-version-014065 kubelet[1223]: E0416 00:44:25.952535    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607048  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:32 old-k8s-version-014065 kubelet[1223]: E0416 00:44:32.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607278  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:40 old-k8s-version-014065 kubelet[1223]: E0416 00:44:40.931086    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607466  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:46 old-k8s-version-014065 kubelet[1223]: E0416 00:44:46.931443    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.609801  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:54 old-k8s-version-014065 kubelet[1223]: E0416 00:44:54.366332    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.610002  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:01 old-k8s-version-014065 kubelet[1223]: E0416 00:45:01.931372    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610273  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:08 old-k8s-version-014065 kubelet[1223]: E0416 00:45:08.933619    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610493  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:12 old-k8s-version-014065 kubelet[1223]: E0416 00:45:12.931273    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610681  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.931035    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610917  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.953472    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.611117  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:35 old-k8s-version-014065 kubelet[1223]: E0416 00:45:35.932086    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.611363  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:38 old-k8s-version-014065 kubelet[1223]: E0416 00:45:38.937563    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.613506  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:49 old-k8s-version-014065 kubelet[1223]: E0416 00:45:49.959737    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.613707  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:50 old-k8s-version-014065 kubelet[1223]: E0416 00:45:50.931084    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.613892  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:02 old-k8s-version-014065 kubelet[1223]: E0416 00:46:02.934939    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.614087  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:04 old-k8s-version-014065 kubelet[1223]: E0416 00:46:04.931137    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.614275  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:14 old-k8s-version-014065 kubelet[1223]: E0416 00:46:14.942668    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.616488  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:16 old-k8s-version-014065 kubelet[1223]: E0416 00:46:16.403347    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.616677  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:27 old-k8s-version-014065 kubelet[1223]: E0416 00:46:27.930989    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.616873  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:30 old-k8s-version-014065 kubelet[1223]: E0416 00:46:30.931502    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617084  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:40 old-k8s-version-014065 kubelet[1223]: E0416 00:46:40.931581    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617280  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:45 old-k8s-version-014065 kubelet[1223]: E0416 00:46:45.971319    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617464  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:53 old-k8s-version-014065 kubelet[1223]: E0416 00:46:53.934295    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617670  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:56 old-k8s-version-014065 kubelet[1223]: E0416 00:46:56.930865    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617855  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:05 old-k8s-version-014065 kubelet[1223]: E0416 00:47:05.931075    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618050  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:10 old-k8s-version-014065 kubelet[1223]: E0416 00:47:10.931300    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618235  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:20 old-k8s-version-014065 kubelet[1223]: E0416 00:47:20.932206    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618431  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:25 old-k8s-version-014065 kubelet[1223]: E0416 00:47:25.931180    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618616  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:31 old-k8s-version-014065 kubelet[1223]: E0416 00:47:31.931916    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618840  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:37 old-k8s-version-014065 kubelet[1223]: E0416 00:47:37.931161    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619026  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:44 old-k8s-version-014065 kubelet[1223]: E0416 00:47:44.931376    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619351  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:48 old-k8s-version-014065 kubelet[1223]: E0416 00:47:48.937314    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619548  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619745  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619929  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.620123  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.620308  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
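
The repeated "Found kubelet problem" warnings above come from minikube's log scan matching known failure signatures in the journalctl output; here the matches are pods stuck in ErrImagePull/ImagePullBackOff because fake.domain cannot be resolved and registry.k8s.io/echoserver:1.4 uses the deprecated schema 1 image format. A minimal sketch of that kind of substring scan, assuming a hand-picked pattern list (the patterns and helper names below are illustrative, not minikube's actual logs.go implementation):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// problemPatterns are illustrative substrings that flag a kubelet log line
// as a "problem"; minikube keeps its real list in its logs package.
var problemPatterns = []string{
	"ImagePullBackOff",
	"ErrImagePull",
	"CrashLoopBackOff",
}

// scanKubeletLog returns every journal line matching a known pattern.
func scanKubeletLog(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range problemPatterns {
			if strings.Contains(line, p) {
				problems = append(problems, line)
				break // one match is enough to flag the line
			}
		}
	}
	return problems
}

func main() {
	journal := "Apr 16 00:43:04 kubelet[1223]: E0416 ... ImagePullBackOff: Back-off pulling image\n" +
		"Apr 16 00:43:05 kubelet[1223]: I0416 ... normal startup line\n"
	for _, p := range scanKubeletLog(journal) {
		fmt.Println("Found kubelet problem:", p)
	}
}
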
	I0416 00:48:27.620319  369017 logs.go:123] Gathering logs for kube-apiserver [b8ea3fa2ab02] ...
	I0416 00:48:27.620334  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ea3fa2ab02"
	I0416 00:48:27.698206  369017 logs.go:123] Gathering logs for coredns [697870ff99a4] ...
	I0416 00:48:27.698250  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697870ff99a4"
	I0416 00:48:27.720130  369017 logs.go:123] Gathering logs for kube-scheduler [2d7d1b9e8353] ...
	I0416 00:48:27.720164  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7d1b9e8353"
	I0416 00:48:27.741457  369017 logs.go:123] Gathering logs for dmesg ...
	I0416 00:48:27.741485  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:48:27.764244  369017 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:48:27.764279  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0416 00:48:27.930626  369017 logs.go:123] Gathering logs for kube-proxy [fa54eb276fa9] ...
	I0416 00:48:27.930654  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa54eb276fa9"
	I0416 00:48:27.962973  369017 logs.go:123] Gathering logs for Docker ...
	I0416 00:48:27.963046  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0416 00:48:27.999906  369017 logs.go:123] Gathering logs for etcd [33107d331e0b] ...
	I0416 00:48:27.999940  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33107d331e0b"
	I0416 00:48:28.028776  369017 logs.go:123] Gathering logs for etcd [fd5230a8d74b] ...
	I0416 00:48:28.028817  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd5230a8d74b"
	I0416 00:48:28.064939  369017 logs.go:123] Gathering logs for kube-controller-manager [b3c3c455ea1c] ...
	I0416 00:48:28.064970  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c3c455ea1c"
	I0416 00:48:28.117348  369017 logs.go:123] Gathering logs for kube-controller-manager [4cc3ed1cf27e] ...
	I0416 00:48:28.117379  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc3ed1cf27e"
	I0416 00:48:28.175750  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:28.175783  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0416 00:48:28.175846  369017 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0416 00:48:28.175861  369017 out.go:239]   Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175875  369017 out.go:239]   Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175885  369017 out.go:239]   Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175896  369017 out.go:239]   Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175904  369017 out.go:239]   Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0416 00:48:28.175912  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:28.175920  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:48:38.176694  369017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:48:38.191932  369017 api_server.go:72] duration metric: took 5m49.421692349s to wait for apiserver process to appear ...
	I0416 00:48:38.191963  369017 api_server.go:88] waiting for apiserver healthz status ...
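
The api_server.go lines above first confirm a kube-apiserver process exists (via pgrep) and then wait for its healthz status. A minimal sketch of polling the standard Kubernetes /healthz endpoint, with a placeholder URL and TLS verification skipped for brevity (an assumption for illustration, not minikube's actual api_server.go logic, which uses the cluster's real address and CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers
// "ok" or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	// Placeholder endpoint; substitute the cluster's advertised address.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
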
	I0416 00:48:38.192051  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0416 00:48:38.209623  369017 logs.go:276] 2 containers: [a7d7845d2402 b8ea3fa2ab02]
	I0416 00:48:38.209701  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0416 00:48:38.228837  369017 logs.go:276] 2 containers: [33107d331e0b fd5230a8d74b]
	I0416 00:48:38.228918  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0416 00:48:38.245852  369017 logs.go:276] 2 containers: [65e7340af5ef 697870ff99a4]
	I0416 00:48:38.245943  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0416 00:48:38.268054  369017 logs.go:276] 2 containers: [2d7d1b9e8353 7b437d823755]
	I0416 00:48:38.268136  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0416 00:48:38.292495  369017 logs.go:276] 2 containers: [fa54eb276fa9 bf3ceb2acadb]
	I0416 00:48:38.292571  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0416 00:48:38.314076  369017 logs.go:276] 2 containers: [b3c3c455ea1c 4cc3ed1cf27e]
	I0416 00:48:38.314160  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0416 00:48:38.330761  369017 logs.go:276] 0 containers: []
	W0416 00:48:38.330781  369017 logs.go:278] No container was found matching "kindnet"
	I0416 00:48:38.330834  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0416 00:48:38.349329  369017 logs.go:276] 2 containers: [c5eb3f5fa95a d8e8f85e95c4]
	I0416 00:48:38.349479  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0416 00:48:38.366973  369017 logs.go:276] 1 containers: [c311fb93e11b]
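
Each "N containers:" line above is the result of listing containers, running or exited, whose names carry the kubeadm k8s_<component> prefix. A local sketch of the same docker invocation, run directly rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers returns the IDs of all containers whose name matches
// the kubeadm naming prefix k8s_<component>, mirroring the
// `docker ps -a --filter --format` calls in the log above.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listK8sContainers("kube-scheduler")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
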
	I0416 00:48:38.367003  369017 logs.go:123] Gathering logs for kube-apiserver [b8ea3fa2ab02] ...
	I0416 00:48:38.367015  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ea3fa2ab02"
	I0416 00:48:38.422460  369017 logs.go:123] Gathering logs for etcd [33107d331e0b] ...
	I0416 00:48:38.422494  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33107d331e0b"
	I0416 00:48:38.459188  369017 logs.go:123] Gathering logs for kube-scheduler [2d7d1b9e8353] ...
	I0416 00:48:38.459287  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7d1b9e8353"
	I0416 00:48:38.488818  369017 logs.go:123] Gathering logs for kube-controller-manager [b3c3c455ea1c] ...
	I0416 00:48:38.488856  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c3c455ea1c"
	I0416 00:48:38.528135  369017 logs.go:123] Gathering logs for kubelet ...
	I0416 00:48:38.528170  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0416 00:48:38.584942  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:03 old-k8s-version-014065 kubelet[1223]: E0416 00:43:03.441058    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.586673  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:04 old-k8s-version-014065 kubelet[1223]: E0416 00:43:04.639437    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.589065  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:15 old-k8s-version-014065 kubelet[1223]: E0416 00:43:15.964300    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.593156  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:20 old-k8s-version-014065 kubelet[1223]: E0416 00:43:20.969599    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.593356  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:21 old-k8s-version-014065 kubelet[1223]: E0416 00:43:21.832458    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.594043  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:26 old-k8s-version-014065 kubelet[1223]: E0416 00:43:26.931292    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.596270  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.428287    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.597041  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.976063    1223 pod_workers.go:191] Error syncing pod cef2692c-ceee-4c9c-892a-75dcaae5ab8a ("storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"
	W0416 00:48:38.599100  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:38 old-k8s-version-014065 kubelet[1223]: E0416 00:43:38.955013    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.599665  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:46 old-k8s-version-014065 kubelet[1223]: E0416 00:43:46.931311    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.599979  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:53 old-k8s-version-014065 kubelet[1223]: E0416 00:43:53.963232    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.602177  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:02 old-k8s-version-014065 kubelet[1223]: E0416 00:44:02.430723    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.602364  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:07 old-k8s-version-014065 kubelet[1223]: E0416 00:44:07.936331    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.602559  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:14 old-k8s-version-014065 kubelet[1223]: E0416 00:44:14.931692    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.604596  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:20 old-k8s-version-014065 kubelet[1223]: E0416 00:44:20.968241    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.604793  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:25 old-k8s-version-014065 kubelet[1223]: E0416 00:44:25.952535    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.604977  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:32 old-k8s-version-014065 kubelet[1223]: E0416 00:44:32.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.605172  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:40 old-k8s-version-014065 kubelet[1223]: E0416 00:44:40.931086    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.605356  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:46 old-k8s-version-014065 kubelet[1223]: E0416 00:44:46.931443    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.607588  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:54 old-k8s-version-014065 kubelet[1223]: E0416 00:44:54.366332    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.607775  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:01 old-k8s-version-014065 kubelet[1223]: E0416 00:45:01.931372    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.607984  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:08 old-k8s-version-014065 kubelet[1223]: E0416 00:45:08.933619    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608167  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:12 old-k8s-version-014065 kubelet[1223]: E0416 00:45:12.931273    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608349  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.931035    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608545  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.953472    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608728  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:35 old-k8s-version-014065 kubelet[1223]: E0416 00:45:35.932086    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608930  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:38 old-k8s-version-014065 kubelet[1223]: E0416 00:45:38.937563    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.610973  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:49 old-k8s-version-014065 kubelet[1223]: E0416 00:45:49.959737    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.611166  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:50 old-k8s-version-014065 kubelet[1223]: E0416 00:45:50.931084    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611382  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:02 old-k8s-version-014065 kubelet[1223]: E0416 00:46:02.934939    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611578  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:04 old-k8s-version-014065 kubelet[1223]: E0416 00:46:04.931137    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611762  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:14 old-k8s-version-014065 kubelet[1223]: E0416 00:46:14.942668    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.613954  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:16 old-k8s-version-014065 kubelet[1223]: E0416 00:46:16.403347    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.614138  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:27 old-k8s-version-014065 kubelet[1223]: E0416 00:46:27.930989    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614333  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:30 old-k8s-version-014065 kubelet[1223]: E0416 00:46:30.931502    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614519  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:40 old-k8s-version-014065 kubelet[1223]: E0416 00:46:40.931581    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614712  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:45 old-k8s-version-014065 kubelet[1223]: E0416 00:46:45.971319    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614911  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:53 old-k8s-version-014065 kubelet[1223]: E0416 00:46:53.934295    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615106  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:56 old-k8s-version-014065 kubelet[1223]: E0416 00:46:56.930865    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615296  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:05 old-k8s-version-014065 kubelet[1223]: E0416 00:47:05.931075    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615490  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:10 old-k8s-version-014065 kubelet[1223]: E0416 00:47:10.931300    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615673  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:20 old-k8s-version-014065 kubelet[1223]: E0416 00:47:20.932206    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615866  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:25 old-k8s-version-014065 kubelet[1223]: E0416 00:47:25.931180    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616049  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:31 old-k8s-version-014065 kubelet[1223]: E0416 00:47:31.931916    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616243  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:37 old-k8s-version-014065 kubelet[1223]: E0416 00:47:37.931161    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616448  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:44 old-k8s-version-014065 kubelet[1223]: E0416 00:47:44.931376    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616644  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:48 old-k8s-version-014065 kubelet[1223]: E0416 00:47:48.937314    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616827  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617022  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617204  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617401  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617586  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617782  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:33 old-k8s-version-014065 kubelet[1223]: E0416 00:48:33.935545    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0416 00:48:38.617792  369017 logs.go:123] Gathering logs for kube-apiserver [a7d7845d2402] ...
	I0416 00:48:38.617805  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d7845d2402"
	I0416 00:48:38.659232  369017 logs.go:123] Gathering logs for coredns [65e7340af5ef] ...
	I0416 00:48:38.659348  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65e7340af5ef"
	I0416 00:48:38.688264  369017 logs.go:123] Gathering logs for kube-proxy [fa54eb276fa9] ...
	I0416 00:48:38.688293  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa54eb276fa9"
	I0416 00:48:38.708927  369017 logs.go:123] Gathering logs for kube-proxy [bf3ceb2acadb] ...
	I0416 00:48:38.708955  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3ceb2acadb"
	I0416 00:48:38.729344  369017 logs.go:123] Gathering logs for kube-controller-manager [4cc3ed1cf27e] ...
	I0416 00:48:38.729374  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc3ed1cf27e"
	I0416 00:48:38.771163  369017 logs.go:123] Gathering logs for Docker ...
	I0416 00:48:38.771228  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0416 00:48:38.802002  369017 logs.go:123] Gathering logs for dmesg ...
	I0416 00:48:38.802041  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:48:38.822471  369017 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:48:38.822500  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0416 00:48:38.978993  369017 logs.go:123] Gathering logs for container status ...
	I0416 00:48:38.979023  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 00:48:39.074662  369017 logs.go:123] Gathering logs for coredns [697870ff99a4] ...
	I0416 00:48:39.074703  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697870ff99a4"
	I0416 00:48:39.100511  369017 logs.go:123] Gathering logs for storage-provisioner [d8e8f85e95c4] ...
	I0416 00:48:39.100545  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8e8f85e95c4"
	I0416 00:48:39.122784  369017 logs.go:123] Gathering logs for storage-provisioner [c5eb3f5fa95a] ...
	I0416 00:48:39.122813  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5eb3f5fa95a"
	I0416 00:48:39.144632  369017 logs.go:123] Gathering logs for kubernetes-dashboard [c311fb93e11b] ...
	I0416 00:48:39.144662  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c311fb93e11b"
	I0416 00:48:39.166870  369017 logs.go:123] Gathering logs for etcd [fd5230a8d74b] ...
	I0416 00:48:39.166901  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd5230a8d74b"
	I0416 00:48:39.190433  369017 logs.go:123] Gathering logs for kube-scheduler [7b437d823755] ...
	I0416 00:48:39.190464  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b437d823755"
	I0416 00:48:39.213048  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:39.213073  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0416 00:48:39.213162  369017 out.go:239] X Problems detected in kubelet:
	W0416 00:48:39.213174  369017 out.go:239]   Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213185  369017 out.go:239]   Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213207  369017 out.go:239]   Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213217  369017 out.go:239]   Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213223  369017 out.go:239]   Apr 16 00:48:33 old-k8s-version-014065 kubelet[1223]: E0416 00:48:33.935545    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0416 00:48:39.213235  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:39.213242  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:48:49.213969  369017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0416 00:48:49.227100  369017 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0416 00:48:49.230281  369017 out.go:177] 
	W0416 00:48:49.232609  369017 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0416 00:48:49.232667  369017 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0416 00:48:49.232693  369017 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0416 00:48:49.232702  369017 out.go:239] * 
	W0416 00:48:49.234077  369017 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:48:49.236416  369017 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
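For local triage, the remediation suggested in the log above can be replayed against the same profile. This is a minimal sketch, assuming an out/minikube-linux-arm64 binary built from this commit is available; the start flags are copied verbatim from the failing invocation:

	# wipe the stale profile state that blocked the control-plane update (the log's own suggestion)
	out/minikube-linux-arm64 delete --all --purge
	# re-run the exact start that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
	# if it fails again, capture logs for the related upstream issue (kubernetes/minikube#11417)
	out/minikube-linux-arm64 -p old-k8s-version-014065 logs --file=logs.txt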
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-014065
helpers_test.go:235: (dbg) docker inspect old-k8s-version-014065:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60",
	        "Created": "2024-04-16T00:39:40.987999443Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 369218,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-16T00:42:41.221775429Z",
	            "FinishedAt": "2024-04-16T00:42:40.122892043Z"
	        },
	        "Image": "sha256:05b5b2cbc7157bfe11e03d0beeaf25e36e83e7ad2b499390548ca8693c4ec20b",
	        "ResolvConfPath": "/var/lib/docker/containers/452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60/hosts",
	        "LogPath": "/var/lib/docker/containers/452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60/452df2b37af5461b3847af989f157738d2b001fb3a94037a0b53ad327cfc3f60-json.log",
	        "Name": "/old-k8s-version-014065",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-014065:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-014065",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e795d5efe9e3395660beeb319a913dd91648822a94033fc56d569f00ea12dd-init/diff:/var/lib/docker/overlay2/d2fb7d5dfad483877edf794e760fbf311a1d68be07bb2438f714c78875e64b61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e795d5efe9e3395660beeb319a913dd91648822a94033fc56d569f00ea12dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e795d5efe9e3395660beeb319a913dd91648822a94033fc56d569f00ea12dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e795d5efe9e3395660beeb319a913dd91648822a94033fc56d569f00ea12dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-014065",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-014065/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-014065",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-014065",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-014065",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2366f76f50746089ddf80f87abc9f131724ad0cbdf337794fed5772b35677fe5",
	            "SandboxKey": "/var/run/docker/netns/2366f76f5074",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-014065": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "5628bbc7652a15748e66b46ffe1774aed037ef9be513caab37abce93f2de8a59",
	                    "EndpointID": "1a0e629a9588566a74c3a22b9431a54d443b79b273903fb5d604cd92557c3516",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-014065",
	                        "452df2b37af5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
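Rather than scanning the full JSON dump above, individual fields can be pulled with docker inspect's Go templates, the same mechanism the harness uses later in this log. A small sketch, assuming the old-k8s-version-014065 container still exists on the host:

	# container state and restart count
	docker container inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-014065
	# host port mapped to the guest SSH port 22 (33137 in the dump above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-014065
	# the profile network's IPv4 address (192.168.76.2 above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-014065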
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-014065 -n old-k8s-version-014065
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-014065 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-014065 logs -n 25: (1.482671767s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p kubenet-128493 sudo                                 | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	|         | containerd config dump                                 |                        |         |                |                     |                     |
	| ssh     | -p kubenet-128493 sudo                                 | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC |                     |
	|         | systemctl status crio --all                            |                        |         |                |                     |                     |
	|         | --full --no-pager                                      |                        |         |                |                     |                     |
	| ssh     | -p kubenet-128493 sudo                                 | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |                |                     |                     |
	| ssh     | -p kubenet-128493 sudo find                            | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |                |                     |                     |
	| ssh     | -p kubenet-128493 sudo crio                            | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	|         | config                                                 |                        |         |                |                     |                     |
	| delete  | -p kubenet-128493                                      | kubenet-128493         | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	| start   | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                        |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |                |                     |                     |
	|         | --driver=docker                                        |                        |         |                |                     |                     |
	|         | --container-runtime=docker                             |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                        |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-140300             | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:41 UTC | 16 Apr 24 00:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |                |                     |                     |
	| stop    | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:41 UTC | 16 Apr 24 00:41 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-140300                  | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:41 UTC | 16 Apr 24 00:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |                |                     |                     |
	| start   | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:41 UTC | 16 Apr 24 00:46 UTC |
	|         | --memory=2200 --alsologtostderr                        |                        |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |                |                     |                     |
	|         | --driver=docker                                        |                        |         |                |                     |                     |
	|         | --container-runtime=docker                             |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                        |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-014065        | old-k8s-version-014065 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:42 UTC | 16 Apr 24 00:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |                |                     |                     |
	| stop    | -p old-k8s-version-014065                              | old-k8s-version-014065 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:42 UTC | 16 Apr 24 00:42 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-014065             | old-k8s-version-014065 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:42 UTC | 16 Apr 24 00:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |                |                     |                     |
	| start   | -p old-k8s-version-014065                              | old-k8s-version-014065 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:42 UTC |                     |
	|         | --memory=2200                                          |                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |                |                     |                     |
	|         | --kvm-network=default                                  |                        |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |                |                     |                     |
	|         | --keep-context=false                                   |                        |         |                |                     |                     |
	|         | --driver=docker                                        |                        |         |                |                     |                     |
	|         | --container-runtime=docker                             |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |                |                     |                     |
	| image   | no-preload-140300 image list                           | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:46 UTC |
	|         | --format=json                                          |                        |         |                |                     |                     |
	| pause   | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:46 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |                |                     |                     |
	| unpause | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:46 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |                |                     |                     |
	| delete  | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:46 UTC |
	| delete  | -p no-preload-140300                                   | no-preload-140300      | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:46 UTC |
	| start   | -p embed-certs-534050                                  | embed-certs-534050     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:46 UTC | 16 Apr 24 00:47 UTC |
	|         | --memory=2200                                          |                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |                |                     |                     |
	|         |  --container-runtime=docker                            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                        |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-534050            | embed-certs-534050     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC | 16 Apr 24 00:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |                |                     |                     |
	| stop    | -p embed-certs-534050                                  | embed-certs-534050     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC | 16 Apr 24 00:47 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-534050                 | embed-certs-534050     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC | 16 Apr 24 00:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |                |                     |                     |
	| start   | -p embed-certs-534050                                  | embed-certs-534050     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | --memory=2200                                          |                        |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |                |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |                |                     |                     |
	|         |  --container-runtime=docker                            |                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                        |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:47:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:47:52.169928  381606 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:47:52.170078  381606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:47:52.170088  381606 out.go:304] Setting ErrFile to fd 2...
	I0416 00:47:52.170094  381606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:47:52.170419  381606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0416 00:47:52.170957  381606 out.go:298] Setting JSON to false
	I0416 00:47:52.172162  381606 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5408,"bootTime":1713223065,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0416 00:47:52.172258  381606 start.go:139] virtualization:  
	I0416 00:47:52.176023  381606 out.go:177] * [embed-certs-534050] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0416 00:47:52.178365  381606 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:47:52.180277  381606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:47:52.178433  381606 notify.go:220] Checking for updates...
	I0416 00:47:52.184518  381606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:47:52.186737  381606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0416 00:47:52.188750  381606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0416 00:47:52.190857  381606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:47:52.193394  381606 config.go:182] Loaded profile config "embed-certs-534050": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 00:47:52.193917  381606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:47:52.214700  381606 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0416 00:47:52.214893  381606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0416 00:47:52.280238  381606 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-16 00:47:52.262737594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0416 00:47:52.280350  381606 docker.go:295] overlay module found
	I0416 00:47:52.283601  381606 out.go:177] * Using the docker driver based on existing profile
	I0416 00:47:52.285713  381606 start.go:297] selected driver: docker
	I0416 00:47:52.285734  381606 start.go:901] validating driver "docker" against &{Name:embed-certs-534050 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-534050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:47:52.285849  381606 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:47:52.286491  381606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0416 00:47:52.340629  381606 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-16 00:47:52.329683207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0416 00:47:52.341013  381606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:47:52.341074  381606 cni.go:84] Creating CNI manager for ""
	I0416 00:47:52.341095  381606 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 00:47:52.341148  381606 start.go:340] cluster config:
	{Name:embed-certs-534050 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-534050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:47:52.344457  381606 out.go:177] * Starting "embed-certs-534050" primary control-plane node in "embed-certs-534050" cluster
	I0416 00:47:52.346204  381606 cache.go:121] Beginning downloading kic base image for docker with docker
	I0416 00:47:52.348384  381606 out.go:177] * Pulling base image v0.0.43-1713215244-18647 ...
	I0416 00:47:52.350523  381606 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 00:47:52.350575  381606 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0416 00:47:52.350583  381606 cache.go:56] Caching tarball of preloaded images
	I0416 00:47:52.350671  381606 preload.go:173] Found /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0416 00:47:52.350689  381606 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0416 00:47:52.350816  381606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/config.json ...
	I0416 00:47:52.351046  381606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0416 00:47:52.367593  381606 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon, skipping pull
	I0416 00:47:52.367620  381606 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in daemon, skipping load
	I0416 00:47:52.367643  381606 cache.go:194] Successfully downloaded all kic artifacts
	I0416 00:47:52.367679  381606 start.go:360] acquireMachinesLock for embed-certs-534050: {Name:mke7c4d31de3c5169ff1da2f0f005f363b817435 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:47:52.367749  381606 start.go:364] duration metric: took 43.569µs to acquireMachinesLock for "embed-certs-534050"
	I0416 00:47:52.367774  381606 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:47:52.367780  381606 fix.go:54] fixHost starting: 
	I0416 00:47:52.368106  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:47:52.383637  381606 fix.go:112] recreateIfNeeded on embed-certs-534050: state=Stopped err=<nil>
	W0416 00:47:52.383676  381606 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:47:52.386107  381606 out.go:177] * Restarting existing docker container for "embed-certs-534050" ...
	I0416 00:47:52.052167  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:54.550341  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:52.388448  381606 cli_runner.go:164] Run: docker start embed-certs-534050
	I0416 00:47:52.750077  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:47:52.769708  381606 kic.go:430] container "embed-certs-534050" state is running.
	I0416 00:47:52.770094  381606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-534050
	I0416 00:47:52.792735  381606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/config.json ...
	I0416 00:47:52.792968  381606 machine.go:94] provisionDockerMachine start ...
	I0416 00:47:52.793036  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:52.810656  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:52.810941  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:52.810957  381606 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:47:52.812468  381606 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46570->127.0.0.1:33147: read: connection reset by peer
	I0416 00:47:55.963509  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-534050
	
	I0416 00:47:55.963577  381606 ubuntu.go:169] provisioning hostname "embed-certs-534050"
	I0416 00:47:55.963659  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:55.981352  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:55.981625  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:55.981643  381606 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-534050 && echo "embed-certs-534050" | sudo tee /etc/hostname
	I0416 00:47:56.144711  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-534050
	
	I0416 00:47:56.144800  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:56.161425  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:56.161682  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:56.161704  381606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-534050' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-534050/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-534050' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:47:56.307437  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
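Note: the hosts script above is idempotent; it rewrites an existing 127.0.1.1 entry in place, or appends one only if no line already names the host. A quick manual check of the result, run inside the node:
	# Expect exactly one entry mapping 127.0.1.1 to the machine name.
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts    # -> 127.0.1.1 embed-certs-534050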
	I0416 00:47:56.307465  381606 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18647-2210/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-2210/.minikube}
	I0416 00:47:56.307495  381606 ubuntu.go:177] setting up certificates
	I0416 00:47:56.307504  381606 provision.go:84] configureAuth start
	I0416 00:47:56.307565  381606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-534050
	I0416 00:47:56.323463  381606 provision.go:143] copyHostCerts
	I0416 00:47:56.323533  381606 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem, removing ...
	I0416 00:47:56.323543  381606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem
	I0416 00:47:56.323627  381606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/key.pem (1679 bytes)
	I0416 00:47:56.323733  381606 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem, removing ...
	I0416 00:47:56.323738  381606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem
	I0416 00:47:56.323765  381606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/ca.pem (1078 bytes)
	I0416 00:47:56.323820  381606 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem, removing ...
	I0416 00:47:56.323824  381606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem
	I0416 00:47:56.323848  381606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-2210/.minikube/cert.pem (1123 bytes)
	I0416 00:47:56.323893  381606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem org=jenkins.embed-certs-534050 san=[127.0.0.1 192.168.85.2 embed-certs-534050 localhost minikube]
	I0416 00:47:56.616169  381606 provision.go:177] copyRemoteCerts
	I0416 00:47:56.616280  381606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:47:56.616340  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:56.632887  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:47:56.740671  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0416 00:47:56.765705  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0416 00:47:56.791632  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:47:56.822113  381606 provision.go:87] duration metric: took 514.596532ms to configureAuth
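Note: configureAuth regenerated the Docker server certificate with the SAN list shown earlier (127.0.0.1, 192.168.85.2, embed-certs-534050, localhost, minikube) and scp'd it to /etc/docker. One way to confirm the SANs landed, run on the node:
	# Print the Subject Alternative Name extension of the freshly copied server cert.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'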
	I0416 00:47:56.822138  381606 ubuntu.go:193] setting minikube options for container-runtime
	I0416 00:47:56.822337  381606 config.go:182] Loaded profile config "embed-certs-534050": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 00:47:56.822394  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:56.839620  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:56.839884  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:56.839899  381606 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0416 00:47:56.984216  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0416 00:47:56.984238  381606 ubuntu.go:71] root file system type: overlay
	I0416 00:47:56.984376  381606 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0416 00:47:56.984451  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.000095  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:57.000355  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:57.000440  381606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0416 00:47:57.172406  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0416 00:47:57.173738  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.191362  381606 main.go:141] libmachine: Using SSH client type: native
	I0416 00:47:57.191609  381606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I0416 00:47:57.191635  381606 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0416 00:47:57.346006  381606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
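Note: the diff-or-replace command above is the idempotency guard; the unit file is swapped in and Docker restarted only when the rendered unit differs from the installed one (the empty output here suggests it was already current). To see what systemd is actually running afterwards, a sketch:
	# Show the effective unit, confirm a single ExecStart survived the reset,
	# and check the daemon's cgroup driver (the same probe this log runs later).
	systemctl cat docker | grep -E '^ExecStart'
	docker info --format '{{.CgroupDriver}}'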
	I0416 00:47:57.346034  381606 machine.go:97] duration metric: took 4.553049482s to provisionDockerMachine
	I0416 00:47:57.346045  381606 start.go:293] postStartSetup for "embed-certs-534050" (driver="docker")
	I0416 00:47:57.346057  381606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:47:57.346129  381606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:47:57.346176  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.362343  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:47:57.464168  381606 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:47:57.467394  381606 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0416 00:47:57.467434  381606 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0416 00:47:57.467445  381606 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0416 00:47:57.467451  381606 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0416 00:47:57.467461  381606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/addons for local assets ...
	I0416 00:47:57.467524  381606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-2210/.minikube/files for local assets ...
	I0416 00:47:57.467609  381606 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem -> 75632.pem in /etc/ssl/certs
	I0416 00:47:57.467714  381606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:47:57.476443  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem --> /etc/ssl/certs/75632.pem (1708 bytes)
	I0416 00:47:57.502190  381606 start.go:296] duration metric: took 156.129708ms for postStartSetup
	I0416 00:47:57.502274  381606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:47:57.502314  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.518741  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:47:57.616915  381606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0416 00:47:57.621758  381606 fix.go:56] duration metric: took 5.253970982s for fixHost
	I0416 00:47:57.621782  381606 start.go:83] releasing machines lock for "embed-certs-534050", held for 5.254019999s
	I0416 00:47:57.621862  381606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-534050
	I0416 00:47:57.641710  381606 ssh_runner.go:195] Run: cat /version.json
	I0416 00:47:57.641758  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.642025  381606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:47:57.642082  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:47:57.660069  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:47:57.667521  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:47:57.880417  381606 ssh_runner.go:195] Run: systemctl --version
	I0416 00:47:57.885125  381606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 00:47:57.889499  381606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0416 00:47:57.909899  381606 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0416 00:47:57.910026  381606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:47:57.919939  381606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 00:47:57.919965  381606 start.go:494] detecting cgroup driver to use...
	I0416 00:47:57.920022  381606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0416 00:47:57.920144  381606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:47:57.941340  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0416 00:47:57.951709  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0416 00:47:57.961474  381606 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0416 00:47:57.961553  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0416 00:47:57.971619  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 00:47:57.982854  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0416 00:47:57.993417  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0416 00:47:58.005858  381606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:47:58.019395  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0416 00:47:58.031076  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0416 00:47:58.042587  381606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0416 00:47:58.054831  381606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:47:58.064755  381606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:47:58.074075  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:47:58.184876  381606 ssh_runner.go:195] Run: sudo systemctl restart containerd
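Note: the sed edits above pin containerd to the cgroupfs driver, the runc v2 shim, the pause:3.9 sandbox image and the /etc/cni/net.d conf dir before the restart. A one-line spot check that the rewrite took:
	# Both values should reflect the sed edits applied above.
	grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml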
	I0416 00:47:58.334388  381606 start.go:494] detecting cgroup driver to use...
	I0416 00:47:58.334442  381606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0416 00:47:58.334514  381606 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0416 00:47:58.350505  381606 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0416 00:47:58.350578  381606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0416 00:47:58.365207  381606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:47:58.387174  381606 ssh_runner.go:195] Run: which cri-dockerd
	I0416 00:47:58.392559  381606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0416 00:47:58.443177  381606 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0416 00:47:58.510393  381606 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0416 00:47:58.644705  381606 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0416 00:47:58.759994  381606 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0416 00:47:58.760191  381606 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0416 00:47:58.790109  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:47:58.904942  381606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0416 00:47:59.384619  381606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0416 00:47:59.397535  381606 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0416 00:47:59.410994  381606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 00:47:59.424338  381606 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0416 00:47:59.519576  381606 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0416 00:47:59.609678  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:47:59.716922  381606 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0416 00:47:59.732697  381606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0416 00:47:59.745390  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:47:59.838038  381606 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0416 00:47:59.921558  381606 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0416 00:47:59.921637  381606 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0416 00:47:59.925870  381606 start.go:562] Will wait 60s for crictl version
	I0416 00:47:59.925934  381606 ssh_runner.go:195] Run: which crictl
	I0416 00:47:59.930521  381606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:47:59.974451  381606 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.1
	RuntimeApiVersion:  v1
	I0416 00:47:59.974535  381606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0416 00:47:59.995000  381606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
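Note: with /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, crictl reaches Docker through the cri-dockerd shim, which is why the version probe above reports RuntimeName docker. The same client can be used interactively, for example:
	# crictl picks up the runtime endpoint from the /etc/crictl.yaml written above.
	sudo crictl version     # the probe the log runs via /usr/bin/crictl
	sudo crictl ps -a       # list all CRI-visible containers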
	I0416 00:47:56.550842  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:47:58.553866  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:00.566529  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:00.069740  381606 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.1 ...
	I0416 00:48:00.069856  381606 cli_runner.go:164] Run: docker network inspect embed-certs-534050 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0416 00:48:00.152217  381606 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0416 00:48:00.182338  381606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:48:00.218610  381606 kubeadm.go:877] updating cluster {Name:embed-certs-534050 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-534050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:48:00.218771  381606 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0416 00:48:00.218848  381606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 00:48:00.265211  381606 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0416 00:48:00.265235  381606 docker.go:615] Images already preloaded, skipping extraction
	I0416 00:48:00.265313  381606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0416 00:48:00.339184  381606 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0416 00:48:00.339252  381606 cache_images.go:84] Images are preloaded, skipping loading
	I0416 00:48:00.339266  381606 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.29.3 docker true true} ...
	I0416 00:48:00.339422  381606 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-534050 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-534050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:48:00.339552  381606 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0416 00:48:00.538835  381606 cni.go:84] Creating CNI manager for ""
	I0416 00:48:00.538867  381606 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0416 00:48:00.538879  381606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:48:00.538900  381606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-534050 NodeName:embed-certs-534050 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:48:00.539328  381606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-534050"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
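Note: the kubeadm config above is rendered in memory and, per the scp below, written to /var/tmp/minikube/kubeadm.yaml.new. As a sanity check it can be validated offline; a sketch, assuming the bundled kubeadm ships the validate subcommand (present in recent releases):
	# Hypothetical offline check of the rendered config; no cluster access required.
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new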
	
	I0416 00:48:00.539418  381606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 00:48:00.570533  381606 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:48:00.570626  381606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:48:00.587624  381606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0416 00:48:00.623623  381606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:48:00.668285  381606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0416 00:48:00.691499  381606 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0416 00:48:00.697919  381606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:48:00.711437  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:48:00.810658  381606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:48:00.826240  381606 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050 for IP: 192.168.85.2
	I0416 00:48:00.826266  381606 certs.go:194] generating shared ca certs ...
	I0416 00:48:00.826286  381606 certs.go:226] acquiring lock for ca certs: {Name:mk0f2c276f9ccc821c50906b5561fa26a27a6ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:48:00.826427  381606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key
	I0416 00:48:00.826474  381606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key
	I0416 00:48:00.826485  381606 certs.go:256] generating profile certs ...
	I0416 00:48:00.826578  381606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/client.key
	I0416 00:48:00.826670  381606 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/apiserver.key.4028827e
	I0416 00:48:00.826748  381606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/proxy-client.key
	I0416 00:48:00.826864  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563.pem (1338 bytes)
	W0416 00:48:00.826896  381606 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563_empty.pem, impossibly tiny 0 bytes
	I0416 00:48:00.826917  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:48:00.826949  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/ca.pem (1078 bytes)
	I0416 00:48:00.826979  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:48:00.827007  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/certs/key.pem (1679 bytes)
	I0416 00:48:00.827055  381606 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem (1708 bytes)
	I0416 00:48:00.827783  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:48:00.864291  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:48:00.894741  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:48:00.926271  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0416 00:48:00.963387  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 00:48:00.998201  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 00:48:01.035005  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:48:01.073085  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/embed-certs-534050/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:48:01.101296  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/ssl/certs/75632.pem --> /usr/share/ca-certificates/75632.pem (1708 bytes)
	I0416 00:48:01.157486  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:48:01.187316  381606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-2210/.minikube/certs/7563.pem --> /usr/share/ca-certificates/7563.pem (1338 bytes)
	I0416 00:48:01.224986  381606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:48:01.247624  381606 ssh_runner.go:195] Run: openssl version
	I0416 00:48:01.255879  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75632.pem && ln -fs /usr/share/ca-certificates/75632.pem /etc/ssl/certs/75632.pem"
	I0416 00:48:01.270235  381606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75632.pem
	I0416 00:48:01.274707  381606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:43 /usr/share/ca-certificates/75632.pem
	I0416 00:48:01.274795  381606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75632.pem
	I0416 00:48:01.283107  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75632.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:48:01.294290  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:48:01.305451  381606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:48:01.309509  381606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:48:01.309663  381606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:48:01.317179  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:48:01.328021  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7563.pem && ln -fs /usr/share/ca-certificates/7563.pem /etc/ssl/certs/7563.pem"
	I0416 00:48:01.337904  381606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7563.pem
	I0416 00:48:01.343167  381606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:43 /usr/share/ca-certificates/7563.pem
	I0416 00:48:01.343266  381606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7563.pem
	I0416 00:48:01.350761  381606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7563.pem /etc/ssl/certs/51391683.0"
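Note: the /etc/ssl/certs/3ec20f2e.0, b5213941.0 and 51391683.0 targets above are OpenSSL subject-hash names; the library resolves a CA by hashing its subject and looking for <hash>.0 in the certs directory. Reproducing one link by hand:
	# The printed hash matches the symlink name the log creates for minikubeCA.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # -> b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0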
	I0416 00:48:01.361337  381606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:48:01.365278  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:48:01.372452  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:48:01.379792  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:48:01.386734  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:48:01.393998  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:48:01.401421  381606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
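Note: each probe above uses -checkend 86400 to assert the certificate will still be valid 24 hours from now; a non-zero exit would make minikube regenerate the cert rather than reuse it. Stand-alone usage:
	# Exit 0 if still valid in 24h, non-zero if it will have expired by then.
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
		&& echo "still valid" || echo "expires within 24h"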
	I0416 00:48:01.408660  381606 kubeadm.go:391] StartCluster: {Name:embed-certs-534050 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-534050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:48:01.408847  381606 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0416 00:48:01.425623  381606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:48:01.435555  381606 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:48:01.435575  381606 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:48:01.435581  381606 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:48:01.435648  381606 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:48:01.444915  381606 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:48:01.445548  381606 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-534050" does not appear in /home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:48:01.445847  381606 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-2210/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-534050" cluster setting kubeconfig missing "embed-certs-534050" context setting]
	I0416 00:48:01.446320  381606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/kubeconfig: {Name:mk2a4b2f2d98970b43b7e481fd26cc76bda92838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:48:01.447723  381606 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:48:01.458144  381606 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0416 00:48:01.458175  381606 kubeadm.go:591] duration metric: took 22.588873ms to restartPrimaryControlPlane
	I0416 00:48:01.458184  381606 kubeadm.go:393] duration metric: took 49.533039ms to StartCluster
	I0416 00:48:01.458199  381606 settings.go:142] acquiring lock: {Name:mkad41a04993d6fe82f2e16230c6052d1c68b809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:48:01.458256  381606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0416 00:48:01.459621  381606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/kubeconfig: {Name:mk2a4b2f2d98970b43b7e481fd26cc76bda92838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:48:01.459852  381606 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0416 00:48:01.462416  381606 out.go:177] * Verifying Kubernetes components...
	I0416 00:48:01.460143  381606 config.go:182] Loaded profile config "embed-certs-534050": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 00:48:01.460165  381606 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 00:48:01.464416  381606 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-534050"
	I0416 00:48:01.464466  381606 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-534050"
	W0416 00:48:01.464478  381606 addons.go:243] addon storage-provisioner should already be in state true
	I0416 00:48:01.464506  381606 host.go:66] Checking if "embed-certs-534050" exists ...
	I0416 00:48:01.464548  381606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:48:01.464670  381606 addons.go:69] Setting dashboard=true in profile "embed-certs-534050"
	I0416 00:48:01.464710  381606 addons.go:234] Setting addon dashboard=true in "embed-certs-534050"
	W0416 00:48:01.464732  381606 addons.go:243] addon dashboard should already be in state true
	I0416 00:48:01.464789  381606 host.go:66] Checking if "embed-certs-534050" exists ...
	I0416 00:48:01.464966  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:48:01.465286  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:48:01.467297  381606 addons.go:69] Setting default-storageclass=true in profile "embed-certs-534050"
	I0416 00:48:01.467341  381606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-534050"
	I0416 00:48:01.467582  381606 addons.go:69] Setting metrics-server=true in profile "embed-certs-534050"
	I0416 00:48:01.467613  381606 addons.go:234] Setting addon metrics-server=true in "embed-certs-534050"
	W0416 00:48:01.467620  381606 addons.go:243] addon metrics-server should already be in state true
	I0416 00:48:01.467641  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:48:01.467649  381606 host.go:66] Checking if "embed-certs-534050" exists ...
	I0416 00:48:01.468021  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:48:01.497781  381606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:48:01.500318  381606 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:48:01.500337  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 00:48:01.500402  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:48:01.518514  381606 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0416 00:48:01.522460  381606 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0416 00:48:01.538938  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0416 00:48:01.538960  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0416 00:48:01.538892  381606 addons.go:234] Setting addon default-storageclass=true in "embed-certs-534050"
	W0416 00:48:01.539021  381606 addons.go:243] addon default-storageclass should already be in state true
	I0416 00:48:01.539053  381606 host.go:66] Checking if "embed-certs-534050" exists ...
	I0416 00:48:01.539128  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:48:01.539576  381606 cli_runner.go:164] Run: docker container inspect embed-certs-534050 --format={{.State.Status}}
	I0416 00:48:01.554141  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:48:01.585309  381606 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 00:48:01.588778  381606 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 00:48:01.588803  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 00:48:01.588867  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:48:01.584948  381606 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 00:48:01.590941  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 00:48:01.591011  381606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-534050
	I0416 00:48:01.593110  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:48:01.613682  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:48:01.634951  381606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/embed-certs-534050/id_rsa Username:docker}
	I0416 00:48:01.683465  381606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:48:01.764147  381606 node_ready.go:35] waiting up to 6m0s for node "embed-certs-534050" to be "Ready" ...
	I0416 00:48:01.843616  381606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:48:01.929818  381606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 00:48:01.929846  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 00:48:02.008552  381606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:48:02.015677  381606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 00:48:02.015751  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 00:48:02.112166  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0416 00:48:02.112193  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0416 00:48:03.055439  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:05.549612  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:02.189623  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0416 00:48:02.189650  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0416 00:48:02.214821  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0416 00:48:02.214848  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0416 00:48:02.236824  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0416 00:48:02.236849  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0416 00:48:02.307093  381606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:48:02.307120  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 00:48:02.380986  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0416 00:48:02.381012  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0416 00:48:02.624805  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0416 00:48:02.624833  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0416 00:48:02.635417  381606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 00:48:02.702812  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0416 00:48:02.702848  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0416 00:48:02.928190  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0416 00:48:02.928218  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0416 00:48:02.999353  381606 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0416 00:48:02.999394  381606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0416 00:48:03.086567  381606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
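The dashboard apply above passes ten `-f` flags to the versioned kubectl binary under a pinned KUBECONFIG. A sketch of how such a command can be assembled with os/exec; the binary path, kubeconfig, and manifest list are placeholders, and the real run additionally goes through sudo over SSH:

```go
package sketch

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f m1 -f m2 ...` with a fixed kubeconfig.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}
```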
	I0416 00:48:06.865089  381606 node_ready.go:49] node "embed-certs-534050" has status "Ready":"True"
	I0416 00:48:06.865124  381606 node_ready.go:38] duration metric: took 5.100926585s for node "embed-certs-534050" to be "Ready" ...
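The node_ready wait above (5.1s for embed-certs-534050) polls the node's Ready condition until it flips to True or the budget runs out. A minimal client-go sketch of the same loop, with kubeconfig path, node name, and the 2s poll interval as illustrative choices rather than minikube's actual settings:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady blocks until the named node reports Ready=True or ctx expires.
func waitNodeReady(ctx context.Context, kubeconfig, node string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", node, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}
```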
	I0416 00:48:06.865135  381606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 00:48:06.945474  381606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-t2nr8" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.010637  381606 pod_ready.go:92] pod "coredns-76f75df574-t2nr8" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.010676  381606 pod_ready.go:81] duration metric: took 65.160973ms for pod "coredns-76f75df574-t2nr8" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.010689  381606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.045010  381606 pod_ready.go:92] pod "etcd-embed-certs-534050" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.045038  381606 pod_ready.go:81] duration metric: took 34.341314ms for pod "etcd-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.045049  381606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.068197  381606 pod_ready.go:92] pod "kube-apiserver-embed-certs-534050" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.068224  381606 pod_ready.go:81] duration metric: took 23.168241ms for pod "kube-apiserver-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.068244  381606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.081222  381606 pod_ready.go:92] pod "kube-controller-manager-embed-certs-534050" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.081258  381606 pod_ready.go:81] duration metric: took 13.005681ms for pod "kube-controller-manager-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.081272  381606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l8zwp" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.095266  381606 pod_ready.go:92] pod "kube-proxy-l8zwp" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.095303  381606 pod_ready.go:81] duration metric: took 14.014488ms for pod "kube-proxy-l8zwp" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.095319  381606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:08.050814  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:10.051128  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:07.474200  381606 pod_ready.go:92] pod "kube-scheduler-embed-certs-534050" in "kube-system" namespace has status "Ready":"True"
	I0416 00:48:07.474227  381606 pod_ready.go:81] duration metric: took 378.899879ms for pod "kube-scheduler-embed-certs-534050" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:07.474239  381606 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace to be "Ready" ...
	I0416 00:48:09.496681  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:10.821894  381606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.978238145s)
	I0416 00:48:10.821994  381606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.813416212s)
	W0416 00:48:10.822018  381606 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0416 00:48:10.822039  381606 retry.go:31] will retry after 176.459314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
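The `retry.go:31] will retry after 176.459314ms` line above shows the apply being retried with a short randomized delay after the transient connection-refused failure. A sketch of that retry shape; the jitter bounds, doubling, and attempt cap here are illustrative, not minikube's actual policy:

```go
package sketch

import (
	"math/rand"
	"time"
)

// retryWithJitter re-runs op with a randomized, roughly doubling delay between
// attempts, returning the last error if every attempt fails.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Sleep base plus up to 100% jitter so retries do not synchronize.
		time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		base *= 2
	}
	return err
}
```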
	I0416 00:48:10.822120  381606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.1866774s)
	I0416 00:48:10.822139  381606 addons.go:470] Verifying addon metrics-server=true in "embed-certs-534050"
	I0416 00:48:10.965393  381606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.878765967s)
	I0416 00:48:10.967898  381606 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-534050 addons enable metrics-server
	
	I0416 00:48:10.999120  381606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:48:11.258847  381606 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0416 00:48:11.261040  381606 addons.go:505] duration metric: took 9.800868584s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0416 00:48:11.981208  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:12.550061  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:15.053238  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:14.480700  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:16.481072  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:17.553163  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:20.050973  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:18.982184  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:20.982643  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:22.053952  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:24.549677  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:23.480851  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:25.481664  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:26.550323  369017 pod_ready.go:102] pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:27.050917  369017 pod_ready.go:81] duration metric: took 4m0.007077036s for pod "metrics-server-9975d5f86-8k8tv" in "kube-system" namespace to be "Ready" ...
	E0416 00:48:27.050943  369017 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0416 00:48:27.050953  369017 pod_ready.go:38] duration metric: took 5m26.131424924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
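The `WaitExtra: waitPodCondition: context deadline exceeded` failure above is the natural outcome when a condition poll is driven by context.WithTimeout: after the 4m budget the context's error surfaces instead of a Ready result. A minimal sketch of that pattern, with the condition function and 2s interval as placeholders:

```go
package sketch

import (
	"context"
	"time"
)

// waitFor polls cond until it returns true, errors, or timeout elapses.
func waitFor(cond func(context.Context) (bool, error), timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		if ok, err := cond(ctx); err != nil {
			return err
		} else if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" once the budget runs out
		case <-time.After(2 * time.Second):
		}
	}
}
```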
	I0416 00:48:27.050972  369017 api_server.go:52] waiting for apiserver process to appear ...
	I0416 00:48:27.051054  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0416 00:48:27.073537  369017 logs.go:276] 2 containers: [a7d7845d2402 b8ea3fa2ab02]
	I0416 00:48:27.073627  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0416 00:48:27.092597  369017 logs.go:276] 2 containers: [33107d331e0b fd5230a8d74b]
	I0416 00:48:27.092711  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0416 00:48:27.123060  369017 logs.go:276] 2 containers: [65e7340af5ef 697870ff99a4]
	I0416 00:48:27.123158  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0416 00:48:27.141834  369017 logs.go:276] 2 containers: [2d7d1b9e8353 7b437d823755]
	I0416 00:48:27.141920  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0416 00:48:27.160491  369017 logs.go:276] 2 containers: [fa54eb276fa9 bf3ceb2acadb]
	I0416 00:48:27.160587  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0416 00:48:27.180907  369017 logs.go:276] 2 containers: [b3c3c455ea1c 4cc3ed1cf27e]
	I0416 00:48:27.180992  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0416 00:48:27.198507  369017 logs.go:276] 0 containers: []
	W0416 00:48:27.198530  369017 logs.go:278] No container was found matching "kindnet"
	I0416 00:48:27.198587  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0416 00:48:27.216044  369017 logs.go:276] 1 containers: [c311fb93e11b]
	I0416 00:48:27.216241  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0416 00:48:27.232689  369017 logs.go:276] 2 containers: [c5eb3f5fa95a d8e8f85e95c4]
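Each `logs.go:276] N containers: [...]` line above comes from a `docker ps -a` call filtered by the kubelet's `k8s_<component>` container-name convention and formatted down to bare IDs. A short sketch of that discovery step:

```go
package sketch

import (
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the kubelet convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}
```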
	I0416 00:48:27.232722  369017 logs.go:123] Gathering logs for kubernetes-dashboard [c311fb93e11b] ...
	I0416 00:48:27.232735  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c311fb93e11b"
	I0416 00:48:27.254742  369017 logs.go:123] Gathering logs for kube-apiserver [a7d7845d2402] ...
	I0416 00:48:27.254816  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d7845d2402"
	I0416 00:48:27.299302  369017 logs.go:123] Gathering logs for coredns [65e7340af5ef] ...
	I0416 00:48:27.299340  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65e7340af5ef"
	I0416 00:48:27.327427  369017 logs.go:123] Gathering logs for kube-scheduler [7b437d823755] ...
	I0416 00:48:27.327461  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b437d823755"
	I0416 00:48:27.354603  369017 logs.go:123] Gathering logs for kube-proxy [bf3ceb2acadb] ...
	I0416 00:48:27.354636  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3ceb2acadb"
	I0416 00:48:27.377974  369017 logs.go:123] Gathering logs for storage-provisioner [c5eb3f5fa95a] ...
	I0416 00:48:27.378013  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5eb3f5fa95a"
	I0416 00:48:27.400538  369017 logs.go:123] Gathering logs for storage-provisioner [d8e8f85e95c4] ...
	I0416 00:48:27.400566  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8e8f85e95c4"
	I0416 00:48:27.421475  369017 logs.go:123] Gathering logs for container status ...
	I0416 00:48:27.421505  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
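The container-status command above is a shell-level fallback chain: prefer `crictl ps -a`, fall back to `docker ps -a` if crictl is missing or fails. The same idea expressed directly in Go, trying each CLI in order:

```go
package sketch

import "os/exec"

// containerStatus returns whatever listing succeeds first: crictl, then docker.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}
```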
	I0416 00:48:27.501109  369017 logs.go:123] Gathering logs for kubelet ...
	I0416 00:48:27.501142  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0416 00:48:27.578790  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:03 old-k8s-version-014065 kubelet[1223]: E0416 00:43:03.441058    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.580653  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:04 old-k8s-version-014065 kubelet[1223]: E0416 00:43:04.639437    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.583270  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:15 old-k8s-version-014065 kubelet[1223]: E0416 00:43:15.964300    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.592833  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:20 old-k8s-version-014065 kubelet[1223]: E0416 00:43:20.969599    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.593059  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:21 old-k8s-version-014065 kubelet[1223]: E0416 00:43:21.832458    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.594338  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:26 old-k8s-version-014065 kubelet[1223]: E0416 00:43:26.931292    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.597710  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.428287    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.598545  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.976063    1223 pod_workers.go:191] Error syncing pod cef2692c-ceee-4c9c-892a-75dcaae5ab8a ("storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"
	W0416 00:48:27.600747  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:38 old-k8s-version-014065 kubelet[1223]: E0416 00:43:38.955013    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.601295  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:46 old-k8s-version-014065 kubelet[1223]: E0416 00:43:46.931311    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.601614  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:53 old-k8s-version-014065 kubelet[1223]: E0416 00:43:53.963232    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.604001  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:02 old-k8s-version-014065 kubelet[1223]: E0416 00:44:02.430723    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.604206  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:07 old-k8s-version-014065 kubelet[1223]: E0416 00:44:07.936331    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.604402  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:14 old-k8s-version-014065 kubelet[1223]: E0416 00:44:14.931692    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.606653  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:20 old-k8s-version-014065 kubelet[1223]: E0416 00:44:20.968241    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.606863  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:25 old-k8s-version-014065 kubelet[1223]: E0416 00:44:25.952535    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607048  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:32 old-k8s-version-014065 kubelet[1223]: E0416 00:44:32.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607278  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:40 old-k8s-version-014065 kubelet[1223]: E0416 00:44:40.931086    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.607466  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:46 old-k8s-version-014065 kubelet[1223]: E0416 00:44:46.931443    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.609801  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:54 old-k8s-version-014065 kubelet[1223]: E0416 00:44:54.366332    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.610002  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:01 old-k8s-version-014065 kubelet[1223]: E0416 00:45:01.931372    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610273  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:08 old-k8s-version-014065 kubelet[1223]: E0416 00:45:08.933619    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610493  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:12 old-k8s-version-014065 kubelet[1223]: E0416 00:45:12.931273    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610681  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.931035    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.610917  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.953472    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.611117  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:35 old-k8s-version-014065 kubelet[1223]: E0416 00:45:35.932086    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.611363  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:38 old-k8s-version-014065 kubelet[1223]: E0416 00:45:38.937563    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.613506  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:49 old-k8s-version-014065 kubelet[1223]: E0416 00:45:49.959737    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:27.613707  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:50 old-k8s-version-014065 kubelet[1223]: E0416 00:45:50.931084    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.613892  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:02 old-k8s-version-014065 kubelet[1223]: E0416 00:46:02.934939    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.614087  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:04 old-k8s-version-014065 kubelet[1223]: E0416 00:46:04.931137    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.614275  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:14 old-k8s-version-014065 kubelet[1223]: E0416 00:46:14.942668    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.616488  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:16 old-k8s-version-014065 kubelet[1223]: E0416 00:46:16.403347    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:27.616677  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:27 old-k8s-version-014065 kubelet[1223]: E0416 00:46:27.930989    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.616873  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:30 old-k8s-version-014065 kubelet[1223]: E0416 00:46:30.931502    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617084  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:40 old-k8s-version-014065 kubelet[1223]: E0416 00:46:40.931581    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617280  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:45 old-k8s-version-014065 kubelet[1223]: E0416 00:46:45.971319    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617464  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:53 old-k8s-version-014065 kubelet[1223]: E0416 00:46:53.934295    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617670  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:56 old-k8s-version-014065 kubelet[1223]: E0416 00:46:56.930865    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.617855  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:05 old-k8s-version-014065 kubelet[1223]: E0416 00:47:05.931075    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618050  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:10 old-k8s-version-014065 kubelet[1223]: E0416 00:47:10.931300    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618235  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:20 old-k8s-version-014065 kubelet[1223]: E0416 00:47:20.932206    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618431  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:25 old-k8s-version-014065 kubelet[1223]: E0416 00:47:25.931180    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618616  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:31 old-k8s-version-014065 kubelet[1223]: E0416 00:47:31.931916    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.618840  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:37 old-k8s-version-014065 kubelet[1223]: E0416 00:47:37.931161    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619026  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:44 old-k8s-version-014065 kubelet[1223]: E0416 00:47:44.931376    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619351  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:48 old-k8s-version-014065 kubelet[1223]: E0416 00:47:48.937314    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619548  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619745  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.619929  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.620123  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:27.620308  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
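The long run of `logs.go:138] Found kubelet problem` warnings above is produced by scanning the journalctl output for error-level pod-sync failures. A sketch of that scan; the regular expression is an approximation of the shape of the flagged lines, not minikube's actual matcher:

```go
package sketch

import (
	"bufio"
	"regexp"
	"strings"
)

// Matches kubelet error lines of the form seen above:
// "... kubelet[1223]: E0416 ... Error syncing pod ...".
var problemRe = regexp.MustCompile(`kubelet\[\d+\]: E\d+ .*Error syncing pod`)

// kubeletProblems returns every journal line that looks like a pod-sync error.
func kubeletProblems(journal string) []string {
	var found []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			found = append(found, sc.Text())
		}
	}
	return found
}
```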
	I0416 00:48:27.620319  369017 logs.go:123] Gathering logs for kube-apiserver [b8ea3fa2ab02] ...
	I0416 00:48:27.620334  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ea3fa2ab02"
	I0416 00:48:27.698206  369017 logs.go:123] Gathering logs for coredns [697870ff99a4] ...
	I0416 00:48:27.698250  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697870ff99a4"
	I0416 00:48:27.720130  369017 logs.go:123] Gathering logs for kube-scheduler [2d7d1b9e8353] ...
	I0416 00:48:27.720164  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7d1b9e8353"
	I0416 00:48:27.741457  369017 logs.go:123] Gathering logs for dmesg ...
	I0416 00:48:27.741485  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:48:27.764244  369017 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:48:27.764279  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0416 00:48:27.930626  369017 logs.go:123] Gathering logs for kube-proxy [fa54eb276fa9] ...
	I0416 00:48:27.930654  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa54eb276fa9"
	I0416 00:48:27.962973  369017 logs.go:123] Gathering logs for Docker ...
	I0416 00:48:27.963046  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0416 00:48:27.999906  369017 logs.go:123] Gathering logs for etcd [33107d331e0b] ...
	I0416 00:48:27.999940  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33107d331e0b"
	I0416 00:48:28.028776  369017 logs.go:123] Gathering logs for etcd [fd5230a8d74b] ...
	I0416 00:48:28.028817  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd5230a8d74b"
	I0416 00:48:28.064939  369017 logs.go:123] Gathering logs for kube-controller-manager [b3c3c455ea1c] ...
	I0416 00:48:28.064970  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c3c455ea1c"
	I0416 00:48:28.117348  369017 logs.go:123] Gathering logs for kube-controller-manager [4cc3ed1cf27e] ...
	I0416 00:48:28.117379  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc3ed1cf27e"
	I0416 00:48:28.175750  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:28.175783  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0416 00:48:28.175846  369017 out.go:239] X Problems detected in kubelet:
	W0416 00:48:28.175861  369017 out.go:239]   Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175875  369017 out.go:239]   Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175885  369017 out.go:239]   Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175896  369017 out.go:239]   Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:28.175904  369017 out.go:239]   Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0416 00:48:28.175912  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:28.175920  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:48:27.981703  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:29.983323  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:32.480320  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:34.481646  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:36.983474  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:38.176694  369017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:48:38.191932  369017 api_server.go:72] duration metric: took 5m49.421692349s to wait for apiserver process to appear ...
	I0416 00:48:38.191963  369017 api_server.go:88] waiting for apiserver healthz status ...
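The apiserver wait above happens in two stages: first confirm the process exists (the `pgrep -xnf kube-apiserver.*minikube.*` run), then poll the /healthz endpoint until it answers. A sketch of both checks; the URL, the plain-200 success test, and the skip-verify client (for a test cluster's self-signed serving cert) are assumptions of this sketch:

```go
package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os/exec"
)

// apiserverRunning reports whether a kube-apiserver process is visible to pgrep.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// healthz fetches e.g. https://192.168.76.2:8443/healthz and checks for HTTP 200.
func healthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test-only
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz not ready: %d %q", resp.StatusCode, body)
	}
	return nil
}
```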
	I0416 00:48:38.192051  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0416 00:48:38.209623  369017 logs.go:276] 2 containers: [a7d7845d2402 b8ea3fa2ab02]
	I0416 00:48:38.209701  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0416 00:48:38.228837  369017 logs.go:276] 2 containers: [33107d331e0b fd5230a8d74b]
	I0416 00:48:38.228918  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0416 00:48:38.245852  369017 logs.go:276] 2 containers: [65e7340af5ef 697870ff99a4]
	I0416 00:48:38.245943  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0416 00:48:38.268054  369017 logs.go:276] 2 containers: [2d7d1b9e8353 7b437d823755]
	I0416 00:48:38.268136  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0416 00:48:38.292495  369017 logs.go:276] 2 containers: [fa54eb276fa9 bf3ceb2acadb]
	I0416 00:48:38.292571  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0416 00:48:38.314076  369017 logs.go:276] 2 containers: [b3c3c455ea1c 4cc3ed1cf27e]
	I0416 00:48:38.314160  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0416 00:48:38.330761  369017 logs.go:276] 0 containers: []
	W0416 00:48:38.330781  369017 logs.go:278] No container was found matching "kindnet"
	I0416 00:48:38.330834  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0416 00:48:38.349329  369017 logs.go:276] 2 containers: [c5eb3f5fa95a d8e8f85e95c4]
	I0416 00:48:38.349479  369017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0416 00:48:38.366973  369017 logs.go:276] 1 containers: [c311fb93e11b]
	I0416 00:48:38.367003  369017 logs.go:123] Gathering logs for kube-apiserver [b8ea3fa2ab02] ...
	I0416 00:48:38.367015  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8ea3fa2ab02"
	I0416 00:48:38.422460  369017 logs.go:123] Gathering logs for etcd [33107d331e0b] ...
	I0416 00:48:38.422494  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33107d331e0b"
	I0416 00:48:38.459188  369017 logs.go:123] Gathering logs for kube-scheduler [2d7d1b9e8353] ...
	I0416 00:48:38.459287  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d7d1b9e8353"
	I0416 00:48:38.488818  369017 logs.go:123] Gathering logs for kube-controller-manager [b3c3c455ea1c] ...
	I0416 00:48:38.488856  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c3c455ea1c"
	I0416 00:48:38.528135  369017 logs.go:123] Gathering logs for kubelet ...
	I0416 00:48:38.528170  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0416 00:48:38.584942  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:03 old-k8s-version-014065 kubelet[1223]: E0416 00:43:03.441058    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.586673  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:04 old-k8s-version-014065 kubelet[1223]: E0416 00:43:04.639437    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.589065  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:15 old-k8s-version-014065 kubelet[1223]: E0416 00:43:15.964300    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.593156  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:20 old-k8s-version-014065 kubelet[1223]: E0416 00:43:20.969599    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.593356  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:21 old-k8s-version-014065 kubelet[1223]: E0416 00:43:21.832458    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.594043  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:26 old-k8s-version-014065 kubelet[1223]: E0416 00:43:26.931292    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.596270  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.428287    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.597041  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:33 old-k8s-version-014065 kubelet[1223]: E0416 00:43:33.976063    1223 pod_workers.go:191] Error syncing pod cef2692c-ceee-4c9c-892a-75dcaae5ab8a ("storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cef2692c-ceee-4c9c-892a-75dcaae5ab8a)"
	W0416 00:48:38.599100  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:38 old-k8s-version-014065 kubelet[1223]: E0416 00:43:38.955013    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.599665  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:46 old-k8s-version-014065 kubelet[1223]: E0416 00:43:46.931311    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.599979  369017 logs.go:138] Found kubelet problem: Apr 16 00:43:53 old-k8s-version-014065 kubelet[1223]: E0416 00:43:53.963232    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.602177  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:02 old-k8s-version-014065 kubelet[1223]: E0416 00:44:02.430723    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.602364  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:07 old-k8s-version-014065 kubelet[1223]: E0416 00:44:07.936331    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.602559  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:14 old-k8s-version-014065 kubelet[1223]: E0416 00:44:14.931692    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.604596  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:20 old-k8s-version-014065 kubelet[1223]: E0416 00:44:20.968241    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.604793  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:25 old-k8s-version-014065 kubelet[1223]: E0416 00:44:25.952535    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.604977  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:32 old-k8s-version-014065 kubelet[1223]: E0416 00:44:32.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.605172  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:40 old-k8s-version-014065 kubelet[1223]: E0416 00:44:40.931086    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.605356  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:46 old-k8s-version-014065 kubelet[1223]: E0416 00:44:46.931443    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.607588  369017 logs.go:138] Found kubelet problem: Apr 16 00:44:54 old-k8s-version-014065 kubelet[1223]: E0416 00:44:54.366332    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.607775  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:01 old-k8s-version-014065 kubelet[1223]: E0416 00:45:01.931372    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.607984  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:08 old-k8s-version-014065 kubelet[1223]: E0416 00:45:08.933619    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608167  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:12 old-k8s-version-014065 kubelet[1223]: E0416 00:45:12.931273    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608349  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.931035    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608545  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:23 old-k8s-version-014065 kubelet[1223]: E0416 00:45:23.953472    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608728  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:35 old-k8s-version-014065 kubelet[1223]: E0416 00:45:35.932086    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.608930  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:38 old-k8s-version-014065 kubelet[1223]: E0416 00:45:38.937563    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.610973  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:49 old-k8s-version-014065 kubelet[1223]: E0416 00:45:49.959737    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0416 00:48:38.611166  369017 logs.go:138] Found kubelet problem: Apr 16 00:45:50 old-k8s-version-014065 kubelet[1223]: E0416 00:45:50.931084    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611382  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:02 old-k8s-version-014065 kubelet[1223]: E0416 00:46:02.934939    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611578  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:04 old-k8s-version-014065 kubelet[1223]: E0416 00:46:04.931137    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.611762  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:14 old-k8s-version-014065 kubelet[1223]: E0416 00:46:14.942668    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.613954  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:16 old-k8s-version-014065 kubelet[1223]: E0416 00:46:16.403347    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0416 00:48:38.614138  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:27 old-k8s-version-014065 kubelet[1223]: E0416 00:46:27.930989    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614333  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:30 old-k8s-version-014065 kubelet[1223]: E0416 00:46:30.931502    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614519  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:40 old-k8s-version-014065 kubelet[1223]: E0416 00:46:40.931581    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614712  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:45 old-k8s-version-014065 kubelet[1223]: E0416 00:46:45.971319    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.614911  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:53 old-k8s-version-014065 kubelet[1223]: E0416 00:46:53.934295    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615106  369017 logs.go:138] Found kubelet problem: Apr 16 00:46:56 old-k8s-version-014065 kubelet[1223]: E0416 00:46:56.930865    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615296  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:05 old-k8s-version-014065 kubelet[1223]: E0416 00:47:05.931075    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615490  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:10 old-k8s-version-014065 kubelet[1223]: E0416 00:47:10.931300    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615673  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:20 old-k8s-version-014065 kubelet[1223]: E0416 00:47:20.932206    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.615866  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:25 old-k8s-version-014065 kubelet[1223]: E0416 00:47:25.931180    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616049  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:31 old-k8s-version-014065 kubelet[1223]: E0416 00:47:31.931916    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616243  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:37 old-k8s-version-014065 kubelet[1223]: E0416 00:47:37.931161    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616448  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:44 old-k8s-version-014065 kubelet[1223]: E0416 00:47:44.931376    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616644  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:48 old-k8s-version-014065 kubelet[1223]: E0416 00:47:48.937314    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.616827  369017 logs.go:138] Found kubelet problem: Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617022  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617204  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617401  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617586  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:38.617782  369017 logs.go:138] Found kubelet problem: Apr 16 00:48:33 old-k8s-version-014065 kubelet[1223]: E0416 00:48:33.935545    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
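Note on the warning block above: it mixes two independent failures. The dashboard-metrics-scraper pod cannot start because registry.k8s.io/echoserver:1.4 is published in the deprecated Docker image manifest v2, schema 1 format, which the Docker 26 daemon on this node rejects by default; the metrics-server pod cannot start because its image reference points at fake.domain, which never resolves. The repeated ImagePullBackOff entries are just back-off retries of those two pulls. One way to confirm the manifest format of the first image without pulling it, assuming a docker CLI with manifest-inspect support (output details may vary for legacy schema 1 images):

	# schemaVersion/mediaType in the returned manifest identify the format
	docker manifest inspect registry.k8s.io/echoserver:1.4
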
	I0416 00:48:38.617792  369017 logs.go:123] Gathering logs for kube-apiserver [a7d7845d2402] ...
	I0416 00:48:38.617805  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7d7845d2402"
	I0416 00:48:38.659232  369017 logs.go:123] Gathering logs for coredns [65e7340af5ef] ...
	I0416 00:48:38.659348  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65e7340af5ef"
	I0416 00:48:38.688264  369017 logs.go:123] Gathering logs for kube-proxy [fa54eb276fa9] ...
	I0416 00:48:38.688293  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa54eb276fa9"
	I0416 00:48:38.708927  369017 logs.go:123] Gathering logs for kube-proxy [bf3ceb2acadb] ...
	I0416 00:48:38.708955  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3ceb2acadb"
	I0416 00:48:38.729344  369017 logs.go:123] Gathering logs for kube-controller-manager [4cc3ed1cf27e] ...
	I0416 00:48:38.729374  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cc3ed1cf27e"
	I0416 00:48:38.771163  369017 logs.go:123] Gathering logs for Docker ...
	I0416 00:48:38.771228  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0416 00:48:38.802002  369017 logs.go:123] Gathering logs for dmesg ...
	I0416 00:48:38.802041  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:48:38.822471  369017 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:48:38.822500  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0416 00:48:38.978993  369017 logs.go:123] Gathering logs for container status ...
	I0416 00:48:38.979023  369017 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 00:48:39.074662  369017 logs.go:123] Gathering logs for coredns [697870ff99a4] ...
	I0416 00:48:39.074703  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 697870ff99a4"
	I0416 00:48:39.100511  369017 logs.go:123] Gathering logs for storage-provisioner [d8e8f85e95c4] ...
	I0416 00:48:39.100545  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8e8f85e95c4"
	I0416 00:48:39.122784  369017 logs.go:123] Gathering logs for storage-provisioner [c5eb3f5fa95a] ...
	I0416 00:48:39.122813  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5eb3f5fa95a"
	I0416 00:48:39.144632  369017 logs.go:123] Gathering logs for kubernetes-dashboard [c311fb93e11b] ...
	I0416 00:48:39.144662  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c311fb93e11b"
	I0416 00:48:39.166870  369017 logs.go:123] Gathering logs for etcd [fd5230a8d74b] ...
	I0416 00:48:39.166901  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd5230a8d74b"
	I0416 00:48:39.190433  369017 logs.go:123] Gathering logs for kube-scheduler [7b437d823755] ...
	I0416 00:48:39.190464  369017 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b437d823755"
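Each "Gathering logs for X" step above is a single bash command executed on the node over SSH, so any of them can be re-run by hand when triaging. For example, to re-fetch the kube-apiserver container log captured here (container ID taken from the listing above; -p selects the minikube profile):

	minikube -p old-k8s-version-014065 ssh -- docker logs --tail 400 a7d7845d2402
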
	I0416 00:48:39.213048  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:39.213073  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0416 00:48:39.213162  369017 out.go:239] X Problems detected in kubelet:
	W0416 00:48:39.213174  369017 out.go:239]   Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213185  369017 out.go:239]   Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213207  369017 out.go:239]   Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213217  369017 out.go:239]   Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0416 00:48:39.213223  369017 out.go:239]   Apr 16 00:48:33 old-k8s-version-014065 kubelet[1223]: E0416 00:48:33.935545    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0416 00:48:39.213235  369017 out.go:304] Setting ErrFile to fd 2...
	I0416 00:48:39.213242  369017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:48:38.985116  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:41.482785  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:43.981777  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
	I0416 00:48:45.981918  381606 pod_ready.go:102] pod "metrics-server-57f55c9bc5-4bqxl" in "kube-system" namespace has status "Ready":"False"
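The lines tagged 381606 belong to a second test binary running in parallel, polling until its metrics-server pod reports Ready (it never does, for the same image-pull reason). A roughly equivalent manual probe, with <context> standing in for that test's kubeconfig context (a hypothetical placeholder, since that profile's name is not shown here):

	kubectl --context <context> -n kube-system get pod metrics-server-57f55c9bc5-4bqxl \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
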
	I0416 00:48:49.213969  369017 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0416 00:48:49.227100  369017 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
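This healthz probe succeeds, which is worth noting: the failure that follows is a timeout waiting for the control plane to report the expected version, not an unreachable apiserver. The same probe can be issued by hand (-k skips TLS verification, acceptable for a liveness check):

	curl -sk https://192.168.76.2:8443/healthz
	# expected output: ok
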
	I0416 00:48:49.230281  369017 out.go:177] 
	W0416 00:48:49.232609  369017 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0416 00:48:49.232667  369017 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0416 00:48:49.232693  369017 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0416 00:48:49.232702  369017 out.go:239] * 
	W0416 00:48:49.234077  369017 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:48:49.236416  369017 out.go:177] 
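Both suggestions in the box are plain minikube invocations; run from the host they would be (the second produces the logs.txt the issue template asks for):

	minikube delete --all --purge
	minikube logs --file=logs.txt
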
	
	
	==> Docker <==
	Apr 16 00:48:27 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:27 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:27 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:28 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:28 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:28 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:28 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:38 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:38 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: 2024/04/16 00:48:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: time="2024-04-16T00:48:39.956667271Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=fc7bb4fc2dbd3e7a traceID=50d125c714d09db8e72f91157c2cb944
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: time="2024-04-16T00:48:39.956725624Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=fc7bb4fc2dbd3e7a traceID=50d125c714d09db8e72f91157c2cb944
	Apr 16 00:48:39 old-k8s-version-014065 dockerd[979]: time="2024-04-16T00:48:39.960407120Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=fc7bb4fc2dbd3e7a traceID=50d125c714d09db8e72f91157c2cb944
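These daemon entries are the server side of the kubelet's metrics-server ImagePullBackOff: every pull attempt dies while resolving fake.domain. The failure reproduces from inside the node with a direct pull, assuming shell access through minikube:

	minikube -p old-k8s-version-014065 ssh -- docker pull fake.domain/registry.k8s.io/echoserver:1.4
	# fails with: dial tcp: lookup fake.domain on 192.168.76.1:53: no such host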
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5eb3f5fa95a3       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       3                   0623b27a16861       storage-provisioner
	c311fb93e11b4       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   d3e602322d94e       kubernetes-dashboard-cd95d586-gh7qv
	65e7340af5efc       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   a2fe98efc4317       coredns-74ff55c5b-ftt5t
	d8e8f85e95c4e       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       2                   0623b27a16861       storage-provisioner
	0575ad4292326       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   fabd6d1c23420       busybox
	fa54eb276fa99       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   e0384a5b397ea       kube-proxy-2ltgk
	33107d331e0b4       05b738aa1bc63                                                                                         5 minutes ago       Running             etcd                      1                   c6b4fd46b8277       etcd-old-k8s-version-014065
	a7d7845d2402e       2c08bbbc02d3a                                                                                         5 minutes ago       Running             kube-apiserver            1                   8b758f0778c42       kube-apiserver-old-k8s-version-014065
	2d7d1b9e83535       e7605f88f17d6                                                                                         5 minutes ago       Running             kube-scheduler            1                   7a01f79623eef       kube-scheduler-old-k8s-version-014065
	b3c3c455ea1c0       1df8a2b116bd1                                                                                         5 minutes ago       Running             kube-controller-manager   1                   629f506fab1f4       kube-controller-manager-old-k8s-version-014065
	5ce6cb9441727       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   82cf1fd86125c       busybox
	697870ff99a46       db91994f4ee8f                                                                                         8 minutes ago       Exited              coredns                   0                   5e996201801f3       coredns-74ff55c5b-ftt5t
	bf3ceb2acadb3       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   7cf84019e80dd       kube-proxy-2ltgk
	7b437d8237553       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   9d5b39c918112       kube-scheduler-old-k8s-version-014065
	4cc3ed1cf27ed       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   b4f27bb094b7c       kube-controller-manager-old-k8s-version-014065
	b8ea3fa2ab024       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   67fe14533f1ad       kube-apiserver-old-k8s-version-014065
	fd5230a8d74bd       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   5d7b40ae19a43       etcd-old-k8s-version-014065
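Reading the table: storage-provisioner is on attempt 3 with attempt 2 exited (matching the earlier CrashLoopBackOff warning), and both generations of the control-plane containers appear, the pre-restart ones Exited and the post-restart ones Running. The same listing can be taken interactively with the command the harness used, falling back to docker when crictl is absent:

	minikube -p old-k8s-version-014065 ssh -- "sudo crictl ps -a || sudo docker ps -a"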
	
	
	==> coredns [65e7340af5ef] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:50154 - 53633 "HINFO IN 8120355963274842863.2253897167941298015. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031784472s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0416 00:43:33.816498       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:43:03.813011041 +0000 UTC m=+0.081684558) (total time: 30.001969867s):
	Trace[2019727887]: [30.001969867s] [30.001969867s] END
	E0416 00:43:33.816550       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0416 00:43:33.816506       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:43:03.812935629 +0000 UTC m=+0.081609155) (total time: 30.002064445s):
	Trace[1427131847]: [30.002064445s] [30.002064445s] END
	E0416 00:43:33.816579       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0416 00:43:33.816781       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:43:03.814435321 +0000 UTC m=+0.083108830) (total time: 30.002328891s):
	Trace[911902081]: [30.002328891s] [30.002328891s] END
	E0416 00:43:33.816794       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
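10.96.0.1:443 is the ClusterIP of the default kubernetes Service, so these traces say CoreDNS spent its first 30 seconds after the restart unable to reach the apiserver over the service network before its watches recovered. The address mapping is easy to confirm (the kubeconfig context created by minikube carries the profile name):

	kubectl --context old-k8s-version-014065 get svc kubernetes -n default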
	
	
	==> coredns [697870ff99a4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51946 - 12757 "HINFO IN 3475134916925666260.1098169330899469510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.060809881s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0416 00:41:11.971146       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:40:41.97055784 +0000 UTC m=+0.054831162) (total time: 30.000489285s):
	Trace[2019727887]: [30.000489285s] [30.000489285s] END
	E0416 00:41:11.971188       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0416 00:41:11.971643       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:40:41.971144231 +0000 UTC m=+0.055417544) (total time: 30.000480096s):
	Trace[939984059]: [30.000480096s] [30.000480096s] END
	E0416 00:41:11.971658       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0416 00:41:11.971757       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-16 00:40:41.971397706 +0000 UTC m=+0.055671028) (total time: 30.000333448s):
	Trace[911902081]: [30.000333448s] [30.000333448s] END
	E0416 00:41:11.971766       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-014065
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-014065
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=old-k8s-version-014065
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T00_40_20_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:40:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-014065
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:48:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:43:51 +0000   Tue, 16 Apr 2024 00:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:43:51 +0000   Tue, 16 Apr 2024 00:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:43:51 +0000   Tue, 16 Apr 2024 00:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:43:51 +0000   Tue, 16 Apr 2024 00:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-014065
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 00e766457fe04ed39a5327d64fb0cf5f
	  System UUID:                132da16a-063e-4e5b-9748-6b3cf0c499e0
	  Boot ID:                    ed177bf7-a11f-466b-8935-e2b8479e05ab
	  Kernel Version:             5.15.0-1057-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.1
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 coredns-74ff55c5b-ftt5t                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m13s
	  kube-system                 etcd-old-k8s-version-014065                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m24s
	  kube-system                 kube-apiserver-old-k8s-version-014065             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-controller-manager-old-k8s-version-014065    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-2ltgk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-scheduler-old-k8s-version-014065             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 metrics-server-9975d5f86-8k8tv                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-6qzsp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-gh7qv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m42s (x5 over 8m42s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s (x5 over 8m42s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s (x4 over 8m42s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s                  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                  kubelet     Node old-k8s-version-014065 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s                  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m15s                  kubelet     Node old-k8s-version-014065 status is now: NodeReady
	  Normal  Starting                 8m9s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m59s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m59s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m59s)  kubelet     Node old-k8s-version-014065 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
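One worked check of the resource math above: requests are reported against Allocatable, so 850m of CPU on a 2-core node is 850/2000 = 42%, and 370Mi of memory against 8022564Ki is 378880/8022564 ≈ 4%, matching the Allocated resources section. The same summary can be pulled on demand:

	kubectl --context old-k8s-version-014065 describe node old-k8s-version-014065 | grep -A12 "Allocated resources"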
	
	
	==> dmesg <==
	[  +0.000919] FS-Cache: N-cookie d=000000003a70b87a{9p.inode} n=0000000082e46d16
	[  +0.001014] FS-Cache: N-key=[8] '9a6ced0000000000'
	[  +0.002893] FS-Cache: Duplicate cookie detected
	[  +0.000699] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000945] FS-Cache: O-cookie d=000000003a70b87a{9p.inode} n=0000000065060579
	[  +0.001199] FS-Cache: O-key=[8] '9a6ced0000000000'
	[  +0.000702] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000911] FS-Cache: N-cookie d=000000003a70b87a{9p.inode} n=00000000e3ab24ac
	[  +0.001035] FS-Cache: N-key=[8] '9a6ced0000000000'
	[  +2.693241] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=000000003a70b87a{9p.inode} n=000000008f112200
	[  +0.001044] FS-Cache: O-key=[8] '996ced0000000000'
	[  +0.000769] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=000000003a70b87a{9p.inode} n=00000000a69e10ca
	[  +0.001055] FS-Cache: N-key=[8] '996ced0000000000'
	[  +0.334896] FS-Cache: Duplicate cookie detected
	[  +0.000796] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=000000003a70b87a{9p.inode} n=0000000018d51e64
	[  +0.001059] FS-Cache: O-key=[8] 'a16ced0000000000'
	[  +0.000711] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=000000003a70b87a{9p.inode} n=00000000910ed9b0
	[  +0.001068] FS-Cache: N-key=[8] 'a16ced0000000000'
	[Apr15 23:52] hrtimer: interrupt took 27400523 ns
	[Apr16 00:27] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [33107d331e0b] <==
	2024-04-16 00:44:48.194789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:44:58.194506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:08.194432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:18.194433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:28.194334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:38.194279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:48.194225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:45:58.194458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:08.194449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:18.194433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:28.194305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:38.194305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:48.194457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:46:58.195130 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:08.194352 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:18.194558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:28.194364 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:38.194386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:48.194558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:47:58.199437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:48:08.194439 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:48:18.194306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:48:28.194460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:48:38.194538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:48:48.194429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [fd5230a8d74b] <==
	raft2024/04/16 00:40:10 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-04-16 00:40:10.090911 I | etcdserver: setting up the initial cluster version to 3.4
	2024-04-16 00:40:10.091617 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-04-16 00:40:10.091854 I | etcdserver/api: enabled capabilities for version 3.4
	2024-04-16 00:40:10.092025 I | etcdserver: published {Name:old-k8s-version-014065 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-04-16 00:40:10.092300 I | embed: ready to serve client requests
	2024-04-16 00:40:10.095626 I | embed: serving client requests on 127.0.0.1:2379
	2024-04-16 00:40:10.095896 I | embed: ready to serve client requests
	2024-04-16 00:40:10.099891 I | embed: serving client requests on 192.168.76.2:2379
	2024-04-16 00:40:29.251021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:40:29.577533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:40:39.591649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:40:49.577449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:40:59.577640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:09.577658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:19.577492 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:29.578987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:39.577545 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:49.577451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:41:59.577858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:42:09.577654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:42:19.577432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-16 00:42:29.860439 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/04/16 00:42:29 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-04-16 00:42:29.919156 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 00:48:50 up  1:31,  0 users,  load average: 3.51, 3.44, 3.76
	Linux old-k8s-version-014065 5.15.0-1057-aws #63~20.04.1-Ubuntu SMP Mon Mar 25 10:29:14 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
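
Note: the kernel block is a point-in-time host snapshot; the same three facts (uptime/load, kernel, distro) can be re-captured on a live node with:

	out/minikube-linux-arm64 -p old-k8s-version-014065 ssh "uptime && uname -a && grep PRETTY_NAME /etc/os-release"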
	
	
	==> kube-apiserver [a7d7845d2402] <==
	I0416 00:45:48.553559       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:45:48.553568       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0416 00:46:04.102434       1 handler_proxy.go:102] no RequestInfo found in the context
	E0416 00:46:04.102507       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 00:46:04.102547       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0416 00:46:25.880729       1 client.go:360] parsed scheme: "passthrough"
	I0416 00:46:25.880783       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:46:25.880793       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0416 00:47:05.660986       1 client.go:360] parsed scheme: "passthrough"
	I0416 00:47:05.661175       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:47:05.661197       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0416 00:47:37.369183       1 client.go:360] parsed scheme: "passthrough"
	I0416 00:47:37.369234       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:47:37.369242       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0416 00:48:01.912415       1 handler_proxy.go:102] no RequestInfo found in the context
	E0416 00:48:01.912500       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 00:48:01.912512       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0416 00:48:07.599015       1 client.go:360] parsed scheme: "passthrough"
	I0416 00:48:07.599061       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:48:07.599070       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0416 00:48:40.438763       1 client.go:360] parsed scheme: "passthrough"
	I0416 00:48:40.438827       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0416 00:48:40.438837       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
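
Note: the recurring 503 for v1beta1.metrics.k8s.io means the aggregation layer has no healthy backend for the metrics API, which matches the metrics-server ImagePullBackOff in the kubelet section below; the apiserver just requeues the item. One way to confirm from the client side (not part of the harness run):

	kubectl --context old-k8s-version-014065 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-014065 describe apiservice v1beta1.metrics.k8s.io

An unavailable APIService shows Available=False, typically with a MissingEndpoints or FailedDiscoveryCheck reason.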
	
	
	==> kube-apiserver [b8ea3fa2ab02] <==
	I0416 00:42:29.989968       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0416 00:42:29.990120       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990173       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990213       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990249       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990277       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990305       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990336       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990370       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990406       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990441       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990477       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990517       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990555       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990592       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990629       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990670       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990710       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990743       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990777       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990820       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990915       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.990962       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.991042       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0416 00:42:29.991076       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
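
Note: this burst of connection-refused dials starts at 00:42:29, the same second etcd logged "received terminated signal" above, so it is expected shutdown noise from the stop half of the StartStop test rather than an independent fault. To separate known noise from real findings, minikube can filter its own log dump:

	out/minikube-linux-arm64 -p old-k8s-version-014065 logs --problems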
	
	
	==> kube-controller-manager [4cc3ed1cf27e] <==
	I0416 00:40:37.046270       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0416 00:40:37.046894       1 event.go:291] "Event occurred" object="old-k8s-version-014065" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-014065 event: Registered Node old-k8s-version-014065 in Controller"
	I0416 00:40:37.047831       1 shared_informer.go:247] Caches are synced for namespace 
	I0416 00:40:37.053395       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0416 00:40:37.063632       1 shared_informer.go:247] Caches are synced for GC 
	I0416 00:40:37.077751       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0416 00:40:37.149839       1 range_allocator.go:373] Set node old-k8s-version-014065 PodCIDR to [10.244.0.0/24]
	I0416 00:40:37.177696       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0416 00:40:37.222724       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-h6wrz"
	I0416 00:40:37.233388       1 shared_informer.go:247] Caches are synced for resource quota 
	I0416 00:40:37.245365       1 shared_informer.go:247] Caches are synced for disruption 
	I0416 00:40:37.245386       1 disruption.go:339] Sending events to api server.
	I0416 00:40:37.252563       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0416 00:40:37.268838       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ftt5t"
	I0416 00:40:37.269141       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2ltgk"
	I0416 00:40:37.271542       1 shared_informer.go:247] Caches are synced for resource quota 
	I0416 00:40:37.271812       1 shared_informer.go:247] Caches are synced for attach detach 
	I0416 00:40:37.434474       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0416 00:40:37.468068       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"54eb5f82-4ba2-4f5e-a608-e252f9a59595", ResourceVersion:"268", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63848824820, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b43c20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b43c40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b43c60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001b60cc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b43c80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b43ca0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b43ce0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001a91740), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001b1b118), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400002ca80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001b30278)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001b1b168)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0416 00:40:37.644609       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0416 00:40:37.697583       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0416 00:40:37.697607       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0416 00:40:39.853190       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0416 00:40:39.893587       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-h6wrz"
	I0416 00:42:28.456767       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
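
Note: the long daemon_controller.go:320 error above is a routine optimistic-concurrency conflict: the controller tried to write kube-proxy's DaemonSet status against the stale ResourceVersion "268", the apiserver refused, and the controller re-reads and retries on its own. The live revision can be read with:

	kubectl --context old-k8s-version-014065 -n kube-system get ds kube-proxy -o jsonpath='{.metadata.resourceVersion}'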
	
	
	==> kube-controller-manager [b3c3c455ea1c] <==
	W0416 00:44:25.201150       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:44:51.215732       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:44:56.851554       1 request.go:655] Throttling request took 1.048499968s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0416 00:44:57.702988       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:45:21.717613       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:45:29.353353       1 request.go:655] Throttling request took 1.048239333s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0416 00:45:30.209003       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:45:52.219787       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:46:01.873944       1 request.go:655] Throttling request took 1.048271953s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0416 00:46:02.725449       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:46:22.721932       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:46:34.375910       1 request.go:655] Throttling request took 1.048496301s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0416 00:46:35.227362       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:46:53.223961       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:47:06.877849       1 request.go:655] Throttling request took 1.04829265s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0416 00:47:07.729507       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:47:23.726273       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:47:39.380146       1 request.go:655] Throttling request took 1.048270054s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0416 00:47:40.231849       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:47:54.228390       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:48:11.882511       1 request.go:655] Throttling request took 1.048411725s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0416 00:48:12.733981       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0416 00:48:24.731323       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0416 00:48:44.384421       1 request.go:655] Throttling request took 1.048422269s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0416 00:48:45.236309       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [bf3ceb2acadb] <==
	I0416 00:40:41.627377       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0416 00:40:41.627660       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0416 00:40:41.893248       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0416 00:40:41.893347       1 server_others.go:185] Using iptables Proxier.
	I0416 00:40:41.893556       1 server.go:650] Version: v1.20.0
	I0416 00:40:41.894288       1 config.go:315] Starting service config controller
	I0416 00:40:41.894309       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0416 00:40:41.898863       1 config.go:224] Starting endpoint slice config controller
	I0416 00:40:41.898879       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0416 00:40:41.994410       1 shared_informer.go:247] Caches are synced for service config 
	I0416 00:40:41.999043       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [fa54eb276fa9] <==
	I0416 00:43:03.639865       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0416 00:43:03.639973       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0416 00:43:03.686449       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0416 00:43:03.686583       1 server_others.go:185] Using iptables Proxier.
	I0416 00:43:03.687094       1 server.go:650] Version: v1.20.0
	I0416 00:43:03.687974       1 config.go:315] Starting service config controller
	I0416 00:43:03.687994       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0416 00:43:03.688014       1 config.go:224] Starting endpoint slice config controller
	I0416 00:43:03.688019       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0416 00:43:03.788175       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0416 00:43:03.788118       1 shared_informer.go:247] Caches are synced for service config 
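
Note: the 'Unknown proxy mode "", assuming iptables proxy' warning in both kube-proxy instances only means the mode field in the kube-proxy ConfigMap was left empty, so the default iptables proxier is used. A quick check (the backslash escapes the dot in the data key name):

	kubectl --context old-k8s-version-014065 -n kube-system get cm kube-proxy -o jsonpath='{.data.config\.conf}' | grep '^mode'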
	
	
	==> kube-scheduler [2d7d1b9e8353] <==
	I0416 00:42:55.097742       1 serving.go:331] Generated self-signed cert in-memory
	W0416 00:43:00.996853       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 00:43:00.996889       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:43:00.996902       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:43:00.996908       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:43:01.270542       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:43:01.270582       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:43:01.273092       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0416 00:43:01.273245       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0416 00:43:01.377695       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
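
Note: the extension-apiserver-authentication warnings at startup are transient; this scheduler came up and served on 127.0.0.1:10259 regardless. If one wanted to silence them, the log's own hint translates to roughly the following (the binding name is arbitrary, and the subject is the scheduler's X.509 user rather than a service account):

	kubectl --context old-k8s-version-014065 -n kube-system create rolebinding kube-scheduler-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler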
	
	
	==> kube-scheduler [7b437d823755] <==
	W0416 00:40:17.341339       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:40:17.341389       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:40:17.341413       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:40:17.388261       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:40:17.388295       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:40:17.392734       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0416 00:40:17.392896       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0416 00:40:17.404568       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:40:17.409492       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 00:40:17.411893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 00:40:17.414569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:40:17.414995       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:40:17.418772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 00:40:17.418796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:40:17.423882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 00:40:17.424021       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 00:40:17.424165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 00:40:17.424258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:40:17.444837       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 00:40:18.238624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:40:18.254808       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 00:40:18.503867       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 00:40:18.510960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 00:40:18.536747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0416 00:40:18.888381       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
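
Note: the wall of "forbidden ... cannot list" reflector errors in this first scheduler instance is a boot-time race: its informers started listing before the RBAC bootstrap finished, and the closing "Caches are synced" line shows it recovered unaided. The default grant it was waiting on can be inspected with:

	kubectl --context old-k8s-version-014065 get clusterrolebinding system:kube-scheduler -o wide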
	
	
	==> kubelet <==
	Apr 16 00:46:30 old-k8s-version-014065 kubelet[1223]: E0416 00:46:30.931502    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:46:40 old-k8s-version-014065 kubelet[1223]: E0416 00:46:40.931581    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:46:45 old-k8s-version-014065 kubelet[1223]: E0416 00:46:45.971319    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:46:53 old-k8s-version-014065 kubelet[1223]: E0416 00:46:53.934295    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:46:56 old-k8s-version-014065 kubelet[1223]: E0416 00:46:56.930865    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:05 old-k8s-version-014065 kubelet[1223]: E0416 00:47:05.931075    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:10 old-k8s-version-014065 kubelet[1223]: E0416 00:47:10.931300    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:20 old-k8s-version-014065 kubelet[1223]: E0416 00:47:20.932206    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:25 old-k8s-version-014065 kubelet[1223]: E0416 00:47:25.931180    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:31 old-k8s-version-014065 kubelet[1223]: E0416 00:47:31.931916    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:37 old-k8s-version-014065 kubelet[1223]: E0416 00:47:37.931161    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:44 old-k8s-version-014065 kubelet[1223]: E0416 00:47:44.931376    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:48 old-k8s-version-014065 kubelet[1223]: E0416 00:47:48.937314    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:47:58 old-k8s-version-014065 kubelet[1223]: E0416 00:47:58.931861    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:03 old-k8s-version-014065 kubelet[1223]: E0416 00:48:03.931295    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:09 old-k8s-version-014065 kubelet[1223]: E0416 00:48:09.931027    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:18 old-k8s-version-014065 kubelet[1223]: E0416 00:48:18.930889    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:24 old-k8s-version-014065 kubelet[1223]: E0416 00:48:24.930944    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:33 old-k8s-version-014065 kubelet[1223]: E0416 00:48:33.935545    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:39 old-k8s-version-014065 kubelet[1223]: E0416 00:48:39.960832    1223 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 16 00:48:39 old-k8s-version-014065 kubelet[1223]: E0416 00:48:39.960874    1223 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 16 00:48:39 old-k8s-version-014065 kubelet[1223]: E0416 00:48:39.962361    1223 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-vbk9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Apr 16 00:48:39 old-k8s-version-014065 kubelet[1223]: E0416 00:48:39.962405    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Apr 16 00:48:46 old-k8s-version-014065 kubelet[1223]: E0416 00:48:46.931162    1223 pod_workers.go:191] Error syncing pod 6b782303-4318-44b5-9f42-6664a7d0f2e5 ("dashboard-metrics-scraper-8d5bb5db8-6qzsp_kubernetes-dashboard(6b782303-4318-44b5-9f42-6664a7d0f2e5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 16 00:48:50 old-k8s-version-014065 kubelet[1223]: E0416 00:48:50.931517    1223 pod_workers.go:191] Error syncing pod 660344fa-e392-44df-a655-fe53ae49ca62 ("metrics-server-9975d5f86-8k8tv_kube-system(660344fa-e392-44df-a655-fe53ae49ca62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
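
Note: both crash-looping pods trace back to image pulls: dashboard-metrics-scraper backs off on registry.k8s.io/echoserver:1.4, while metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4, a registry host that never resolves ("no such host" from 192.168.76.1:53), so that pull can never succeed. The pull history is in the pod events; a sketch, assuming the stock k8s-app=metrics-server label:

	kubectl --context old-k8s-version-014065 -n kube-system describe pod -l k8s-app=metrics-server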
	
	
	==> kubernetes-dashboard [c311fb93e11b] <==
	2024/04/16 00:43:26 Using namespace: kubernetes-dashboard
	2024/04/16 00:43:26 Using in-cluster config to connect to apiserver
	2024/04/16 00:43:26 Using secret token for csrf signing
	2024/04/16 00:43:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/16 00:43:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/16 00:43:26 Successful initial request to the apiserver, version: v1.20.0
	2024/04/16 00:43:26 Generating JWE encryption key
	2024/04/16 00:43:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/16 00:43:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/16 00:43:26 Initializing JWE encryption key from synchronized object
	2024/04/16 00:43:26 Creating in-cluster Sidecar client
	2024/04/16 00:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:43:26 Serving insecurely on HTTP port: 9090
	2024/04/16 00:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:44:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:44:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:45:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:45:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/16 00:43:26 Starting overwatch
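
Note: the 30-second health-check loop fails because the dashboard-metrics-scraper Service it polls has no ready backend; its pod sits in ImagePullBackOff per the kubelet section above. Empty endpoints would confirm it:

	kubectl --context old-k8s-version-014065 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper

(The trailing "Starting overwatch" line belongs at startup per its 00:43:26 timestamp; the log collector appears to have appended it out of order.)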
	
	
	==> storage-provisioner [c5eb3f5fa95a] <==
	I0416 00:43:49.094338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 00:43:49.119390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 00:43:49.119478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 00:44:06.621990       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 00:44:06.622663       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff490061-67db-4c7b-9092-84836a794ef0", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-014065_710b4ba9-0fee-4262-83b8-c20ff5069386 became leader
	I0416 00:44:06.622910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-014065_710b4ba9-0fee-4262-83b8-c20ff5069386!
	I0416 00:44:06.723408       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-014065_710b4ba9-0fee-4262-83b8-c20ff5069386!
	
	
	==> storage-provisioner [d8e8f85e95c4] <==
	I0416 00:43:03.592305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0416 00:43:33.594359       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
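
Note on the storage-provisioner pair above: the first instance (d8e8f85e95c4) died dialing 10.96.0.1:443, the in-cluster VIP of the kubernetes Service, most plausibly because service routing was not yet programmed when its ~30-second dial window opened; the replacement (c5eb3f5fa95a) initialized at 00:43:49 and went on to take the leader lease. The VIP is easy to verify:

	kubectl --context old-k8s-version-014065 get svc kubernetes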
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-014065 -n old-k8s-version-014065
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-014065 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-8k8tv dashboard-metrics-scraper-8d5bb5db8-6qzsp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-014065 describe pod metrics-server-9975d5f86-8k8tv dashboard-metrics-scraper-8d5bb5db8-6qzsp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-014065 describe pod metrics-server-9975d5f86-8k8tv dashboard-metrics-scraper-8d5bb5db8-6qzsp: exit status 1 (96.059038ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-8k8tv" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-6qzsp" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-014065 describe pod metrics-server-9975d5f86-8k8tv dashboard-metrics-scraper-8d5bb5db8-6qzsp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (371.23s)
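
Note: the NotFound errors from the final describe are an artifact of the post-mortem helper rather than the cluster: the command at helpers_test.go:277 passes bare pod names with no namespace, so kubectl looks in "default" while the pods actually live in kube-system and kubernetes-dashboard. Namespaced lookups would have succeeded, e.g.:

	kubectl --context old-k8s-version-014065 -n kube-system describe pod metrics-server-9975d5f86-8k8tv
	kubectl --context old-k8s-version-014065 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-6qzsp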


Test pass (321/350)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.37
9 TestDownloadOnly/v1.20.0/DeleteAll 0.32
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.29.3/json-events 8.5
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.2
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-rc.2/json-events 6.7
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.19
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.63
31 TestOffline 66.8
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
36 TestAddons/Setup 145.06
38 TestAddons/parallel/Registry 16.02
40 TestAddons/parallel/InspektorGadget 11.8
41 TestAddons/parallel/MetricsServer 6.97
44 TestAddons/parallel/CSI 56.11
45 TestAddons/parallel/Headlamp 13.05
46 TestAddons/parallel/CloudSpanner 5.76
47 TestAddons/parallel/LocalPath 52.65
48 TestAddons/parallel/NvidiaDevicePlugin 6.47
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 11.12
54 TestCertOptions 36.17
55 TestCertExpiration 248.46
56 TestDockerFlags 42.77
57 TestForceSystemdFlag 45.85
58 TestForceSystemdEnv 46.05
64 TestErrorSpam/setup 31.31
65 TestErrorSpam/start 0.76
66 TestErrorSpam/status 1.04
67 TestErrorSpam/pause 1.34
68 TestErrorSpam/unpause 1.47
69 TestErrorSpam/stop 11
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 80.34
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 32.13
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.12
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
81 TestFunctional/serial/CacheCmd/cache/add_local 1.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 42.77
90 TestFunctional/serial/ComponentHealth 0.12
91 TestFunctional/serial/LogsCmd 1.14
92 TestFunctional/serial/LogsFileCmd 1.15
93 TestFunctional/serial/InvalidService 4.68
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 13.11
97 TestFunctional/parallel/DryRun 0.47
98 TestFunctional/parallel/InternationalLanguage 0.26
99 TestFunctional/parallel/StatusCmd 1.25
103 TestFunctional/parallel/ServiceCmdConnect 8.75
104 TestFunctional/parallel/AddonsCmd 0.23
105 TestFunctional/parallel/PersistentVolumeClaim 27.37
107 TestFunctional/parallel/SSHCmd 0.63
108 TestFunctional/parallel/CpCmd 2.02
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.09
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
119 TestFunctional/parallel/License 0.33
120 TestFunctional/parallel/Version/short 0.09
121 TestFunctional/parallel/Version/components 1.08
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.31
127 TestFunctional/parallel/ImageCommands/Setup 1.91
128 TestFunctional/parallel/DockerEnv/bash 1.37
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.31
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.08
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.84
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
137 TestFunctional/parallel/ServiceCmd/List 0.48
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.54
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
142 TestFunctional/parallel/ServiceCmd/Format 0.45
143 TestFunctional/parallel/ServiceCmd/URL 0.47
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.31
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
157 TestFunctional/parallel/ProfileCmd/profile_list 0.52
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
159 TestFunctional/parallel/MountCmd/any-port 7.53
160 TestFunctional/parallel/MountCmd/specific-port 2.25
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.45
162 TestFunctional/delete_addon-resizer_images 0.08
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 137.23
169 TestMultiControlPlane/serial/DeployApp 59.03
170 TestMultiControlPlane/serial/PingHostFromPods 1.74
171 TestMultiControlPlane/serial/AddWorkerNode 25.89
172 TestMultiControlPlane/serial/NodeLabels 0.11
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
174 TestMultiControlPlane/serial/CopyFile 20.41
175 TestMultiControlPlane/serial/StopSecondaryNode 11.72
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.59
177 TestMultiControlPlane/serial/RestartSecondaryNode 63.83
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 247.49
180 TestMultiControlPlane/serial/DeleteSecondaryNode 12.52
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
182 TestMultiControlPlane/serial/StopCluster 32.92
183 TestMultiControlPlane/serial/RestartCluster 152.17
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
185 TestMultiControlPlane/serial/AddSecondaryNode 47.76
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
189 TestImageBuild/serial/Setup 34.82
190 TestImageBuild/serial/NormalBuild 2.04
191 TestImageBuild/serial/BuildWithBuildArg 1.03
192 TestImageBuild/serial/BuildWithDockerIgnore 0.8
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.8
197 TestJSONOutput/start/Command 75.45
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.62
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.55
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 5.9
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.22
222 TestKicCustomNetwork/create_custom_network 37.44
223 TestKicCustomNetwork/use_default_bridge_network 31.37
224 TestKicExistingNetwork 32.7
225 TestKicCustomSubnet 36.02
226 TestKicStaticIP 36.47
227 TestMainNoArgs 0.06
228 TestMinikubeProfile 75.82
231 TestMountStart/serial/StartWithMountFirst 8.4
232 TestMountStart/serial/VerifyMountFirst 0.26
233 TestMountStart/serial/StartWithMountSecond 10.63
234 TestMountStart/serial/VerifyMountSecond 0.27
235 TestMountStart/serial/DeleteFirst 1.49
236 TestMountStart/serial/VerifyMountPostDelete 0.27
237 TestMountStart/serial/Stop 1.22
238 TestMountStart/serial/RestartStopped 8.6
239 TestMountStart/serial/VerifyMountPostStop 0.28
242 TestMultiNode/serial/FreshStart2Nodes 76.98
243 TestMultiNode/serial/DeployApp2Nodes 38.1
244 TestMultiNode/serial/PingHostFrom2Pods 1.15
245 TestMultiNode/serial/AddNode 18.69
246 TestMultiNode/serial/MultiNodeLabels 0.12
247 TestMultiNode/serial/ProfileList 0.39
248 TestMultiNode/serial/CopyFile 10.74
249 TestMultiNode/serial/StopNode 2.37
250 TestMultiNode/serial/StartAfterStop 11.61
251 TestMultiNode/serial/RestartKeepsNodes 91.27
252 TestMultiNode/serial/DeleteNode 5.54
253 TestMultiNode/serial/StopMultiNode 21.51
254 TestMultiNode/serial/RestartMultiNode 61.85
255 TestMultiNode/serial/ValidateNameConflict 34.88
260 TestPreload 112.01
262 TestScheduledStopUnix 103.55
263 TestSkaffold 120.4
265 TestInsufficientStorage 14.13
266 TestRunningBinaryUpgrade 79.26
268 TestKubernetesUpgrade 386
269 TestMissingContainerUpgrade 118.14
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
272 TestNoKubernetes/serial/StartWithK8s 45.33
273 TestNoKubernetes/serial/StartWithStopK8s 17.95
274 TestNoKubernetes/serial/Start 8.56
286 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
287 TestNoKubernetes/serial/ProfileList 0.85
288 TestNoKubernetes/serial/Stop 1.32
289 TestNoKubernetes/serial/StartNoArgs 8.75
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
291 TestStoppedBinaryUpgrade/Setup 1.74
292 TestStoppedBinaryUpgrade/Upgrade 117.39
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
302 TestPause/serial/Start 52.53
303 TestPause/serial/SecondStartNoReconfiguration 36.93
304 TestPause/serial/Pause 0.64
305 TestPause/serial/VerifyStatus 0.33
306 TestPause/serial/Unpause 0.57
307 TestPause/serial/PauseAgain 0.72
308 TestPause/serial/DeletePaused 2.23
309 TestPause/serial/VerifyDeletedResources 0.34
310 TestNetworkPlugins/group/auto/Start 50.35
311 TestNetworkPlugins/group/auto/KubeletFlags 0.36
312 TestNetworkPlugins/group/auto/NetCatPod 11.46
313 TestNetworkPlugins/group/auto/DNS 0.2
314 TestNetworkPlugins/group/auto/Localhost 0.16
315 TestNetworkPlugins/group/auto/HairPin 0.17
316 TestNetworkPlugins/group/kindnet/Start 69.54
317 TestNetworkPlugins/group/calico/Start 55.7
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.57
320 TestNetworkPlugins/group/kindnet/NetCatPod 13.37
321 TestNetworkPlugins/group/kindnet/DNS 0.33
322 TestNetworkPlugins/group/kindnet/Localhost 0.27
323 TestNetworkPlugins/group/kindnet/HairPin 0.25
324 TestNetworkPlugins/group/custom-flannel/Start 75.71
325 TestNetworkPlugins/group/calico/ControllerPod 29.02
326 TestNetworkPlugins/group/calico/KubeletFlags 0.49
327 TestNetworkPlugins/group/calico/NetCatPod 11.49
328 TestNetworkPlugins/group/calico/DNS 0.28
329 TestNetworkPlugins/group/calico/Localhost 0.18
330 TestNetworkPlugins/group/calico/HairPin 0.24
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.52
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.45
333 TestNetworkPlugins/group/false/Start 58.5
334 TestNetworkPlugins/group/custom-flannel/DNS 0.19
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
337 TestNetworkPlugins/group/enable-default-cni/Start 55.73
338 TestNetworkPlugins/group/false/KubeletFlags 0.39
339 TestNetworkPlugins/group/false/NetCatPod 11.34
340 TestNetworkPlugins/group/false/DNS 0.34
341 TestNetworkPlugins/group/false/Localhost 0.25
342 TestNetworkPlugins/group/false/HairPin 0.29
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.32
345 TestNetworkPlugins/group/flannel/Start 75.1
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
349 TestNetworkPlugins/group/bridge/Start 96.38
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
352 TestNetworkPlugins/group/flannel/NetCatPod 9.29
353 TestNetworkPlugins/group/flannel/DNS 0.21
354 TestNetworkPlugins/group/flannel/Localhost 0.2
355 TestNetworkPlugins/group/flannel/HairPin 0.24
356 TestNetworkPlugins/group/kubenet/Start 88.35
357 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
358 TestNetworkPlugins/group/bridge/NetCatPod 12.37
359 TestNetworkPlugins/group/bridge/DNS 0.29
360 TestNetworkPlugins/group/bridge/Localhost 0.18
361 TestNetworkPlugins/group/bridge/HairPin 0.17
363 TestStartStop/group/old-k8s-version/serial/FirstStart 164.54
364 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
365 TestNetworkPlugins/group/kubenet/NetCatPod 9.28
366 TestNetworkPlugins/group/kubenet/DNS 0.27
367 TestNetworkPlugins/group/kubenet/Localhost 0.2
368 TestNetworkPlugins/group/kubenet/HairPin 0.2
370 TestStartStop/group/no-preload/serial/FirstStart 59.32
371 TestStartStop/group/no-preload/serial/DeployApp 8.45
372 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
373 TestStartStop/group/no-preload/serial/Stop 10.85
374 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/no-preload/serial/SecondStart 266.38
376 TestStartStop/group/old-k8s-version/serial/DeployApp 9.53
377 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.78
378 TestStartStop/group/old-k8s-version/serial/Stop 11.25
379 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/no-preload/serial/Pause 3.06
386 TestStartStop/group/embed-certs/serial/FirstStart 49.74
387 TestStartStop/group/embed-certs/serial/DeployApp 8.35
388 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
389 TestStartStop/group/embed-certs/serial/Stop 11.03
390 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
391 TestStartStop/group/embed-certs/serial/SecondStart 266.71
392 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
394 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
395 TestStartStop/group/old-k8s-version/serial/Pause 2.92
397 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.82
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
399 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
400 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.73
403 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
404 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
405 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
406 TestStartStop/group/embed-certs/serial/Pause 2.9
408 TestStartStop/group/newest-cni/serial/FirstStart 49.29
409 TestStartStop/group/newest-cni/serial/DeployApp 0
410 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
411 TestStartStop/group/newest-cni/serial/Stop 5.77
412 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
413 TestStartStop/group/newest-cni/serial/SecondStart 18.68
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
417 TestStartStop/group/newest-cni/serial/Pause 2.83
418 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
419 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
420 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
421 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.83
x
+
TestDownloadOnly/v1.20.0/json-events (10.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-781968 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-781968 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.436757589s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.44s)
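The json-events subtests consume `minikube start -o=json` output as a stream of one JSON object per line (CloudEvents-style fields). A minimal sketch of such a consumer, assuming line-delimited JSON on stdout; the flags mirror the invocation above, and this is not the test's actual implementation:

	// events.go - a minimal sketch, assuming each stdout line from
	// `minikube start -o=json` is a self-contained JSON object.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
			"--download-only", "-p", "download-only-781968", "--force",
			"--kubernetes-version=v1.20.0", "--driver=docker", "--container-runtime=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]any // expected CloudEvents-style fields: type, data, ...
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Printf("non-JSON line: %s\n", sc.Text())
				continue
			}
			fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
		}
		if err := cmd.Wait(); err != nil {
			fmt.Println("minikube exited:", err)
		}
	}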

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-781968
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-781968: exit status 85 (370.976495ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-781968 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |          |
	|         | -p download-only-781968        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:37:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:37:24.727561    7568 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:37:24.727716    7568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:24.727740    7568 out.go:304] Setting ErrFile to fd 2...
	I0415 23:37:24.727753    7568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:24.728104    7568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	W0415 23:37:24.728266    7568 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18647-2210/.minikube/config/config.json: open /home/jenkins/minikube-integration/18647-2210/.minikube/config/config.json: no such file or directory
	I0415 23:37:24.728711    7568 out.go:298] Setting JSON to true
	I0415 23:37:24.729886    7568 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1180,"bootTime":1713223065,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:37:24.730005    7568 start.go:139] virtualization:  
	I0415 23:37:24.733744    7568 out.go:97] [download-only-781968] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	W0415 23:37:24.733935    7568 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 23:37:24.734038    7568 notify.go:220] Checking for updates...
	I0415 23:37:24.736678    7568 out.go:169] MINIKUBE_LOCATION=18647
	I0415 23:37:24.739382    7568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:37:24.741751    7568 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:37:24.744409    7568 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:37:24.746449    7568 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0415 23:37:24.750572    7568 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:37:24.750939    7568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:37:24.769779    7568 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:37:24.769874    7568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:25.100601    7568 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-15 23:37:25.089529611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:25.100727    7568 docker.go:295] overlay module found
	I0415 23:37:25.102962    7568 out.go:97] Using the docker driver based on user configuration
	I0415 23:37:25.102995    7568 start.go:297] selected driver: docker
	I0415 23:37:25.103003    7568 start.go:901] validating driver "docker" against <nil>
	I0415 23:37:25.103130    7568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:25.152511    7568 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-15 23:37:25.142752003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:25.152684    7568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:37:25.153006    7568 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0415 23:37:25.153225    7568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:37:25.156303    7568 out.go:169] Using Docker driver with root privileges
	I0415 23:37:25.158950    7568 cni.go:84] Creating CNI manager for ""
	I0415 23:37:25.158995    7568 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0415 23:37:25.159084    7568 start.go:340] cluster config:
	{Name:download-only-781968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-781968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:37:25.161784    7568 out.go:97] Starting "download-only-781968" primary control-plane node in "download-only-781968" cluster
	I0415 23:37:25.161835    7568 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 23:37:25.164227    7568 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 23:37:25.164280    7568 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 23:37:25.164368    7568 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 23:37:25.178991    7568 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 23:37:25.179167    7568 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 23:37:25.179316    7568 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 23:37:25.253804    7568 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 23:37:25.253832    7568 cache.go:56] Caching tarball of preloaded images
	I0415 23:37:25.253993    7568 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 23:37:25.256895    7568 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 23:37:25.256931    7568 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 23:37:25.370615    7568 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0415 23:37:31.535881    7568 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 23:37:31.535980    7568 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0415 23:37:32.676877    7568 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0415 23:37:32.677308    7568 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/download-only-781968/config.json ...
	I0415 23:37:32.677488    7568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/download-only-781968/config.json: {Name:mk90b2fdb9ae44f952ef26c1db182473655d332e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:37:32.677670    7568 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0415 23:37:32.677905    7568 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18647-2210/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-781968 host does not exist
	  To start a cluster, run: "minikube start -p download-only-781968"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.37s)
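The download step visible in the log above fetches the preload tarball with an md5 digest appended to the URL (?checksum=md5:...) and then verifies the saved file. A minimal sketch of that verification, with the digest and filename taken from the log and the cache path generalized to $HOME (an assumption; this is not the actual preload.go code):

	// verify_preload.go - a minimal sketch of the checksum step seen in the log:
	// compute the md5 of the downloaded preload tarball and compare it with the
	// digest that was appended to the download URL.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "1a3e8f9b29e6affec63d76d0d3000942" // from the download URL in the log
		// Assumed cache layout under $HOME; the CI run used a minikube-integration home.
		path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
		f, err := os.Open(path)
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println("read:", err)
			return
		}
		got := hex.EncodeToString(h.Sum(nil))
		fmt.Printf("md5 %s (want %s, match=%v)\n", got, want, got == want)
	}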

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-781968
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (8.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-716845 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-716845 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.50258021s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (8.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-716845
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-716845: exit status 85 (77.27588ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-781968 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-781968        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-781968        | download-only-781968 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | -o=json --download-only        | download-only-716845 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-716845        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=docker                |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:37:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:37:36.089200    7737 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:37:36.089336    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:36.089341    7737 out.go:304] Setting ErrFile to fd 2...
	I0415 23:37:36.089347    7737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:36.089680    7737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:37:36.090154    7737 out.go:298] Setting JSON to true
	I0415 23:37:36.090935    7737 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1191,"bootTime":1713223065,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:37:36.091014    7737 start.go:139] virtualization:  
	I0415 23:37:36.122180    7737 out.go:97] [download-only-716845] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0415 23:37:36.150124    7737 out.go:169] MINIKUBE_LOCATION=18647
	I0415 23:37:36.122625    7737 notify.go:220] Checking for updates...
	I0415 23:37:36.200631    7737 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:37:36.233590    7737 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:37:36.251369    7737 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:37:36.276325    7737 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0415 23:37:36.338752    7737 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:37:36.339067    7737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:37:36.357641    7737 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:37:36.357764    7737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:36.425655    7737 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:36.416245369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:36.425775    7737 docker.go:295] overlay module found
	I0415 23:37:36.427924    7737 out.go:97] Using the docker driver based on user configuration
	I0415 23:37:36.427955    7737 start.go:297] selected driver: docker
	I0415 23:37:36.427962    7737 start.go:901] validating driver "docker" against <nil>
	I0415 23:37:36.428077    7737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:36.495001    7737 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:36.480253497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:36.495175    7737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:37:36.495494    7737 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0415 23:37:36.495647    7737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:37:36.498314    7737 out.go:169] Using Docker driver with root privileges
	I0415 23:37:36.501011    7737 cni.go:84] Creating CNI manager for ""
	I0415 23:37:36.501046    7737 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 23:37:36.501064    7737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:37:36.501154    7737 start.go:340] cluster config:
	{Name:download-only-716845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-716845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:37:36.503572    7737 out.go:97] Starting "download-only-716845" primary control-plane node in "download-only-716845" cluster
	I0415 23:37:36.503614    7737 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 23:37:36.506301    7737 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 23:37:36.506329    7737 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 23:37:36.506510    7737 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 23:37:36.519686    7737 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 23:37:36.519809    7737 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 23:37:36.519834    7737 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory, skipping pull
	I0415 23:37:36.519843    7737 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in cache, skipping pull
	I0415 23:37:36.519851    7737 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af as a tarball
	I0415 23:37:36.579264    7737 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0415 23:37:36.579295    7737 cache.go:56] Caching tarball of preloaded images
	I0415 23:37:36.579461    7737 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0415 23:37:36.582503    7737 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 23:37:36.582532    7737 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0415 23:37:36.693322    7737 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-716845 host does not exist
	  To start a cluster, run: "minikube start -p download-only-716845"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-716845
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/json-events (6.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-788547 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-788547 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.695161116s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (6.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-788547
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-788547: exit status 85 (80.37946ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-781968 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-781968           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-781968           | download-only-781968 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | -o=json --download-only           | download-only-716845 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-716845           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| delete  | -p download-only-716845           | download-only-716845 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC | 15 Apr 24 23:37 UTC |
	| start   | -o=json --download-only           | download-only-788547 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-788547           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|         | --driver=docker                   |                      |         |                |                     |                     |
	|         | --container-runtime=docker        |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:37:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:37:45.001989    7898 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:37:45.002247    7898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:45.002271    7898 out.go:304] Setting ErrFile to fd 2...
	I0415 23:37:45.002290    7898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:45.002571    7898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:37:45.003056    7898 out.go:298] Setting JSON to true
	I0415 23:37:45.003943    7898 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1200,"bootTime":1713223065,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:37:45.004053    7898 start.go:139] virtualization:  
	I0415 23:37:45.016099    7898 out.go:97] [download-only-788547] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0415 23:37:45.029628    7898 out.go:169] MINIKUBE_LOCATION=18647
	I0415 23:37:45.017307    7898 notify.go:220] Checking for updates...
	I0415 23:37:45.054442    7898 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:37:45.062960    7898 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:37:45.067228    7898 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:37:45.071877    7898 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0415 23:37:45.078872    7898 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:37:45.079230    7898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:37:45.116926    7898 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:37:45.117049    7898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:45.269416    7898 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:45.258813597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:45.269540    7898 docker.go:295] overlay module found
	I0415 23:37:45.272438    7898 out.go:97] Using the docker driver based on user configuration
	I0415 23:37:45.272485    7898 start.go:297] selected driver: docker
	I0415 23:37:45.272494    7898 start.go:901] validating driver "docker" against <nil>
	I0415 23:37:45.272636    7898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:37:45.337137    7898 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-15 23:37:45.324135229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:37:45.337318    7898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:37:45.337716    7898 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0415 23:37:45.337891    7898 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:37:45.341109    7898 out.go:169] Using Docker driver with root privileges
	I0415 23:37:45.343976    7898 cni.go:84] Creating CNI manager for ""
	I0415 23:37:45.344026    7898 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0415 23:37:45.344047    7898 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:37:45.344144    7898 start.go:340] cluster config:
	{Name:download-only-788547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-788547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:37:45.346803    7898 out.go:97] Starting "download-only-788547" primary control-plane node in "download-only-788547" cluster
	I0415 23:37:45.346847    7898 cache.go:121] Beginning downloading kic base image for docker with docker
	I0415 23:37:45.350142    7898 out.go:97] Pulling base image v0.0.43-1713215244-18647 ...
	I0415 23:37:45.350314    7898 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 23:37:45.350396    7898 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local docker daemon
	I0415 23:37:45.368494    7898 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af to local cache
	I0415 23:37:45.368648    7898 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory
	I0415 23:37:45.368679    7898 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af in local cache directory, skipping pull
	I0415 23:37:45.368685    7898 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af exists in cache, skipping pull
	I0415 23:37:45.368695    7898 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af as a tarball
	I0415 23:37:45.417796    7898 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0415 23:37:45.417821    7898 cache.go:56] Caching tarball of preloaded images
	I0415 23:37:45.417991    7898 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime docker
	I0415 23:37:45.421236    7898 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 23:37:45.421268    7898 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0415 23:37:45.526036    7898 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:f0cbac72359c845c6afc5b35133f3fed -> /home/jenkins/minikube-integration/18647-2210/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-788547 host does not exist
	  To start a cluster, run: "minikube start -p download-only-788547"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.08s)
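
Note: the exit status 85 above is expected, not a failure: a download-only profile never creates a host, so minikube logs has nothing to read and the test asserts the non-zero exit. A quick manual reproduction (run before the profile is deleted below):

    out/minikube-linux-arm64 logs -p download-only-788547
    echo $?   # 85 in this run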

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.19s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-788547
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-763588 --alsologtostderr --binary-mirror http://127.0.0.1:35985 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-763588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-763588
--- PASS: TestBinaryMirror (0.63s)
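
Note: TestBinaryMirror points --binary-mirror at a temporary HTTP endpoint serving the kubeadm/kubelet/kubectl binaries; the real test seeds and serves its own mirror before starting minikube. A rough stand-alone sketch with Python's built-in server (the port, directory, and profile name are placeholders and the directory must already contain the expected layout):

    python3 -m http.server 35985 --directory /tmp/binary-mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:35985 --driver=docker --container-runtime=docker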

TestOffline (66.8s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-017306 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-017306 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m4.047673725s)
helpers_test.go:175: Cleaning up "offline-docker-017306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-017306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-017306: (2.747497872s)
--- PASS: TestOffline (66.80s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-716538
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-716538: exit status 85 (94.95493ms)

-- stdout --
	* Profile "addons-716538" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-716538"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-716538
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-716538: exit status 85 (114.142182ms)

-- stdout --
	* Profile "addons-716538" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-716538"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

TestAddons/Setup (145.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-716538 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-716538 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m25.06080699s)
--- PASS: TestAddons/Setup (145.06s)
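
Note: the --addons flags above enable everything at start time; the parallel tests below use the equivalent post-start subcommands, which can also be run by hand against the same profile:

    out/minikube-linux-arm64 -p addons-716538 addons list
    out/minikube-linux-arm64 addons enable metrics-server -p addons-716538
    out/minikube-linux-arm64 -p addons-716538 addons disable metrics-server --alsologtostderr -v=1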

TestAddons/parallel/Registry (16.02s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.074226ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kkk4n" [7b7d970c-ef98-4622-a315-2fff1161f506] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004518579s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kx5gv" [c7a468da-2d16-4683-a615-a02aee9f0e45] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012345942s
addons_test.go:340: (dbg) Run:  kubectl --context addons-716538 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-716538 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-716538 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.826095727s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.02s)
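
Note: the wget probe above resolves the registry's in-cluster service DNS name from a busybox pod; from the host, registry-proxy publishes the registry on port 5000 of the node IP (the stray DEBUG GET against 192.168.49.2:5000 further down is such a probe). A manual sketch, assuming the standard registry v2 HTTP API:

    curl http://$(out/minikube-linux-arm64 -p addons-716538 ip):5000/v2/_catalog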

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cfccg" [2bf72dd4-3755-4f81-b115-f3fca5b12c1a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004316928s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-716538
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-716538: (5.796065821s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (6.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.979695ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-hz8lx" [479916e2-3562-4ddc-b0b7-942de45464b8] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007037836s
addons_test.go:415: (dbg) Run:  kubectl --context addons-716538 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.97s)
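
Note: once metrics-server reports healthy, the test exercises the metrics API through kubectl top pods as above; node-level metrics come from the same API and are a quick extra sanity check:

    kubectl --context addons-716538 top pods -n kube-system
    kubectl --context addons-716538 top nodes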

TestAddons/parallel/CSI (56.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 44.779315ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-716538 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/15 23:40:34 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-716538 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [af6e0ed7-1b3b-4089-b5cf-12813fa25156] Pending
helpers_test.go:344: "task-pv-pod" [af6e0ed7-1b3b-4089-b5cf-12813fa25156] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [af6e0ed7-1b3b-4089-b5cf-12813fa25156] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.01196597s
addons_test.go:584: (dbg) Run:  kubectl --context addons-716538 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-716538 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-716538 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-716538 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-716538 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-716538 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-716538 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [19ce8cbc-cbb0-4d2b-b2c9-4fa03144580c] Pending
helpers_test.go:344: "task-pv-pod-restore" [19ce8cbc-cbb0-4d2b-b2c9-4fa03144580c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [19ce8cbc-cbb0-4d2b-b2c9-4fa03144580c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004880999s
addons_test.go:626: (dbg) Run:  kubectl --context addons-716538 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-716538 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-716538 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-716538 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.807526175s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.11s)
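
Note: the whole PVC/snapshot/restore round-trip above is driven from testdata manifests. For reference, a minimal stand-alone PVC against the addon looks like the sketch below; the storage class name csi-hostpath-sc is an assumption, so confirm it with the first command before applying:

kubectl --context addons-716538 get storageclass
kubectl --context addons-716538 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
EOF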

TestAddons/parallel/Headlamp (13.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-716538 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-716538 --alsologtostderr -v=1: (1.044245359s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-8kgkm" [8c2192dc-4569-44ee-9ae6-92669a4a99b9] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-8kgkm" [8c2192dc-4569-44ee-9ae6-92669a4a99b9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-8kgkm" [8c2192dc-4569-44ee-9ae6-92669a4a99b9] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003966377s
--- PASS: TestAddons/parallel/Headlamp (13.05s)

TestAddons/parallel/CloudSpanner (5.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-gmw2t" [6611e8ee-89a4-4ff1-908b-52312d162403] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005170817s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-716538
--- PASS: TestAddons/parallel/CloudSpanner (5.76s)

TestAddons/parallel/LocalPath (52.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-716538 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-716538 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7a6ec00a-8acd-40d3-ac1a-a0b58d4e1648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7a6ec00a-8acd-40d3-ac1a-a0b58d4e1648] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7a6ec00a-8acd-40d3-ac1a-a0b58d4e1648] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005322607s
addons_test.go:891: (dbg) Run:  kubectl --context addons-716538 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 ssh "cat /opt/local-path-provisioner/pvc-c12d040b-6548-4227-bc97-a3872884195b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-716538 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-716538 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-716538 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-716538 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.450962403s)
--- PASS: TestAddons/parallel/LocalPath (52.65s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sfnhr" [cf47f930-c16f-4bc9-94a8-8abe11547e86] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003885325s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-716538
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-q6l7s" [45e5790a-7f20-4399-9cbb-1d4b38ba6e3e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006558344s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-716538 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-716538 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
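
Note: this passes because the gcp-auth addon replicates its gcp-auth secret into namespaces created after setup; the assertion is just the two commands above and is easy to rerun by hand (demo-ns is a placeholder):

    kubectl --context addons-716538 create ns demo-ns
    kubectl --context addons-716538 get secret gcp-auth -n demo-ns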

TestAddons/StoppedEnableDisable (11.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-716538
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-716538: (10.831213004s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-716538
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-716538
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-716538
--- PASS: TestAddons/StoppedEnableDisable (11.12s)

TestCertOptions (36.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-854560 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-854560 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.377236228s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-854560 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-854560 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-854560 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-854560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-854560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-854560: (2.146901982s)
--- PASS: TestCertOptions (36.17s)
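
Note: the SAN and port assertions boil down to reading the apiserver certificate off the node, as the openssl invocation above shows. A focused sketch (only works while the profile still exists):

    out/minikube-linux-arm64 -p cert-options-854560 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'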

TestCertExpiration (248.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-714860 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-714860 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (41.648356159s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-714860 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-714860 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.405291557s)
helpers_test.go:175: Cleaning up "cert-expiration-714860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-714860
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-714860: (2.406066919s)
--- PASS: TestCertExpiration (248.46s)
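
Note: the test starts with a deliberately short --cert-expiration=3m, waits out the window (roughly three minutes of the 248s total sit between the two starts), and verifies that restarting with --cert-expiration=8760h recovers. The remaining validity can be read directly (sketch; requires the profile to still exist):

    out/minikube-linux-arm64 -p cert-expiration-714860 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"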

TestDockerFlags (42.77s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-465632 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-465632 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.396352517s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-465632 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-465632 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-465632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-465632
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-465632: (2.496243133s)
--- PASS: TestDockerFlags (42.77s)
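
Note: the two systemctl show probes above are how the test verifies that --docker-env landed in the unit's Environment and --docker-opt in its ExecStart; a targeted re-check is just a grep over the same output:

    out/minikube-linux-arm64 -p docker-flags-465632 ssh \
      "sudo systemctl show docker --property=Environment --no-pager" | grep -o 'FOO=BAR'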

TestForceSystemdFlag (45.85s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-379250 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-379250 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.250047806s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-379250 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-379250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-379250
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-379250: (2.18495464s)
--- PASS: TestForceSystemdFlag (45.85s)

TestForceSystemdEnv (46.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-268643 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-268643 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.941430069s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-268643 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-268643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-268643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-268643: (2.554990126s)
--- PASS: TestForceSystemdEnv (46.05s)

TestErrorSpam/setup (31.31s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-159822 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-159822 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-159822 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-159822 --driver=docker  --container-runtime=docker: (31.30754969s)
--- PASS: TestErrorSpam/setup (31.31s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 stop: (10.792862185s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-159822 --log_dir /tmp/nospam-159822 stop
--- PASS: TestErrorSpam/stop (11.00s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18647-2210/.minikube/files/etc/test/nested/copy/7563/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-673373 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m20.34017879s)
--- PASS: TestFunctional/serial/StartWithProxy (80.34s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --alsologtostderr -v=8
E0415 23:45:18.834699    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:18.842106    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:18.852337    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:18.875331    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:18.915576    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:18.995806    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:19.155957    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:19.476358    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:20.117395    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:21.403571    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:23.964581    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:29.085223    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-673373 --alsologtostderr -v=8: (32.121993483s)
functional_test.go:659: soft start took 32.125853761s for "functional-673373" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.13s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-673373 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-673373 /tmp/TestFunctionalserialCacheCmdcacheadd_local3768527952/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache add minikube-local-cache-test:functional-673373
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache delete minikube-local-cache-test:functional-673373
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-673373
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.936556ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
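
Note: the cache-reload cycle this test exercises can be replayed by hand. A minimal sketch using only commands that appear in the log above (profile functional-673373):

	# remove the image from the node, then confirm it is gone
	out/minikube-linux-arm64 -p functional-673373 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: exit status 1
	# repopulate the node from minikube's local cache, then verify the image is back
	out/minikube-linux-arm64 -p functional-673373 cache reload
	out/minikube-linux-arm64 -p functional-673373 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds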

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 kubectl -- --context functional-673373 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-673373 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (42.77s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0415 23:45:39.325702    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:45:59.805882    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-673373 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.773102657s)
functional_test.go:757: restart took 42.773217494s for "functional-673373" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.77s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-673373 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 logs: (1.137656279s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 logs --file /tmp/TestFunctionalserialLogsFileCmd3318441110/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 logs --file /tmp/TestFunctionalserialLogsFileCmd3318441110/001/logs.txt: (1.151736581s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-673373 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-673373
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-673373: exit status 115 (594.713408ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30549 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-673373 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)
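
Note: the failure mode covered here is a Service whose selector matches no running pod; `minikube service` then exits with status 115 (SVC_UNREACHABLE) rather than printing a reachable URL. A minimal sketch using only commands from the log above:

	kubectl --context functional-673373 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-673373    # exit 115: no running pod for the service
	kubectl --context functional-673373 delete -f testdata/invalidsvc.yaml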

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 config get cpus: exit status 14 (92.821808ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 config get cpus: exit status 14 (105.969701ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
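
Note: the exit codes above follow a simple contract: `config get` on an unset key exits 14, while `config set` and `config unset` exit 0. A minimal sketch of the cycle (same profile; that a successful `config get` prints the stored value is an assumption, not shown in the log):

	out/minikube-linux-arm64 -p functional-673373 config get cpus      # exit 14: key not found
	out/minikube-linux-arm64 -p functional-673373 config set cpus 2
	out/minikube-linux-arm64 -p functional-673373 config get cpus      # exit 0
	out/minikube-linux-arm64 -p functional-673373 config unset cpus
	out/minikube-linux-arm64 -p functional-673373 config get cpus      # exit 14 again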

TestFunctional/parallel/DashboardCmd (13.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-673373 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-673373 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45941: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.11s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-673373 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (182.492482ms)

-- stdout --
	* [functional-673373] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0415 23:47:15.657673   45391 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:47:15.657863   45391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:47:15.657894   45391 out.go:304] Setting ErrFile to fd 2...
	I0415 23:47:15.657918   45391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:47:15.658200   45391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:47:15.658608   45391 out.go:298] Setting JSON to false
	I0415 23:47:15.659688   45391 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1771,"bootTime":1713223065,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:47:15.659792   45391 start.go:139] virtualization:  
	I0415 23:47:15.662691   45391 out.go:177] * [functional-673373] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	I0415 23:47:15.665515   45391 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:47:15.668070   45391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:47:15.665620   45391 notify.go:220] Checking for updates...
	I0415 23:47:15.670546   45391 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:47:15.672682   45391 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:47:15.674856   45391 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0415 23:47:15.677556   45391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:47:15.679985   45391 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:47:15.680535   45391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:47:15.700210   45391 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:47:15.700325   45391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:47:15.764251   45391 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-15 23:47:15.755249323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:47:15.764361   45391 docker.go:295] overlay module found
	I0415 23:47:15.766667   45391 out.go:177] * Using the docker driver based on existing profile
	I0415 23:47:15.768752   45391 start.go:297] selected driver: docker
	I0415 23:47:15.768773   45391 start.go:901] validating driver "docker" against &{Name:functional-673373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-673373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:47:15.768882   45391 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:47:15.771854   45391 out.go:177] 
	W0415 23:47:15.773849   45391 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 23:47:15.775882   45391 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.47s)
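
Note: --dry-run validates the requested resources against the existing profile without touching the cluster; 250MB fails the memory floor check (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation without a memory override passes. A sketch using only flags that appear in the log:

	out/minikube-linux-arm64 start -p functional-673373 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23
	out/minikube-linux-arm64 start -p functional-673373 --dry-run --driver=docker --container-runtime=docker                 # exit 0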

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-673373 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-673373 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (256.1725ms)

-- stdout --
	* [functional-673373] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0415 23:47:16.143948   45499 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:47:16.144154   45499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:47:16.144180   45499 out.go:304] Setting ErrFile to fd 2...
	I0415 23:47:16.144202   45499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:47:16.145195   45499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:47:16.145672   45499 out.go:298] Setting JSON to false
	I0415 23:47:16.146873   45499 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1772,"bootTime":1713223065,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1057-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0415 23:47:16.146980   45499 start.go:139] virtualization:  
	I0415 23:47:16.149622   45499 out.go:177] * [functional-673373] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (arm64)
	I0415 23:47:16.152537   45499 notify.go:220] Checking for updates...
	I0415 23:47:16.154913   45499 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:47:16.157109   45499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:47:16.159346   45499 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	I0415 23:47:16.161839   45499 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	I0415 23:47:16.166452   45499 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0415 23:47:16.168748   45499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:47:16.171642   45499 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:47:16.172164   45499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:47:16.208624   45499 docker.go:122] docker version: linux-26.0.1:Docker Engine - Community
	I0415 23:47:16.208750   45499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:47:16.286461   45499 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-15 23:47:16.277015613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:47:16.286570   45499 docker.go:295] overlay module found
	I0415 23:47:16.289537   45499 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0415 23:47:16.294658   45499 start.go:297] selected driver: docker
	I0415 23:47:16.294682   45499 start.go:901] validating driver "docker" against &{Name:functional-673373 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-673373 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:47:16.294819   45499 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:47:16.297722   45499 out.go:177] 
	W0415 23:47:16.301056   45499 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 23:47:16.304067   45499 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/ServiceCmdConnect (8.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-673373 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-673373 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-swkhv" [525c327b-dc8a-4c1c-8a9f-ddecfced1b82] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-swkhv" [525c327b-dc8a-4c1c-8a9f-ddecfced1b82] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003748917s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32325
functional_test.go:1671: http://192.168.49.2:32325: success! body:

Hostname: hello-node-connect-7799dfb7c6-swkhv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32325
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.75s)
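
Note: the connectivity check reduces to three steps: create a deployment, expose it as a NodePort service, then fetch the URL minikube reports. A sketch; the curl stands in for the HTTP GET the Go harness performs, and the NodePort (32325 above) varies per run:

	kubectl --context functional-673373 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-673373 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-673373 service hello-node-connect --url)
	curl -s "$URL"   # echoserver replies with hostname and request details, as in the body above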

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.37s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88864a1c-f1ca-4864-bb10-337881430146] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004877234s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-673373 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-673373 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-673373 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-673373 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b0ebe959-40fe-4255-83a1-7530177e5eb5] Pending
helpers_test.go:344: "sp-pod" [b0ebe959-40fe-4255-83a1-7530177e5eb5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b0ebe959-40fe-4255-83a1-7530177e5eb5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003591683s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-673373 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-673373 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-673373 delete -f testdata/storage-provisioner/pod.yaml: (1.145089841s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-673373 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa84ff15-2eab-4df0-a9ea-35bb2d6958e1] Pending
helpers_test.go:344: "sp-pod" [fa84ff15-2eab-4df0-a9ea-35bb2d6958e1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fa84ff15-2eab-4df0-a9ea-35bb2d6958e1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005460894s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-673373 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.37s)
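
Note: persistence is verified by writing through one pod and reading through its replacement; data on the claim must survive pod deletion. A minimal sketch using the manifests and commands from the log above:

	kubectl --context functional-673373 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-673373 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-673373 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-673373 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-673373 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-673373 exec sp-pod -- ls /tmp/mount                     # foo should still be listed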

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh -n functional-673373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cp functional-673373:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd93764220/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh -n functional-673373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh -n functional-673373 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7563/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/test/nested/copy/7563/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7563.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/7563.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7563.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /usr/share/ca-certificates/7563.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/75632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /usr/share/ca-certificates/75632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
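
Note: each certificate is checked in three places inside the node: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named file under /etc/ssl/certs (51391683.0 here appears to be the OpenSSL subject-hash name for the same cert). A sketch for the first file from the log:

	out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/7563.pem"
	out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /usr/share/ca-certificates/7563.pem"
	out/minikube-linux-arm64 -p functional-673373 ssh "sudo cat /etc/ssl/certs/51391683.0"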

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-673373 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh "sudo systemctl is-active crio": exit status 1 (360.209245ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
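
Note: the check is a plain systemd query over SSH: with the docker runtime active, crio must not be. `systemctl is-active` prints "inactive" and exits 3 for an inactive unit, which ssh propagates (hence "Process exited with status 3" above):

	out/minikube-linux-arm64 -p functional-673373 ssh "sudo systemctl is-active crio"   # prints "inactive", non-zero exit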

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.08s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 version -o=json --components: (1.079842073s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-673373 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-673373
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-673373
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-673373 image ls --format short --alsologtostderr:
I0415 23:47:23.904392   46936 out.go:291] Setting OutFile to fd 1 ...
I0415 23:47:23.904623   46936 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:23.904652   46936 out.go:304] Setting ErrFile to fd 2...
I0415 23:47:23.904670   46936 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:23.904958   46936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
I0415 23:47:23.905697   46936 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:23.905874   46936 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:23.906386   46936 cli_runner.go:164] Run: docker container inspect functional-673373 --format={{.State.Status}}
I0415 23:47:23.924692   46936 ssh_runner.go:195] Run: systemctl --version
I0415 23:47:23.924750   46936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-673373
I0415 23:47:23.943742   46936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/functional-673373/id_rsa Username:docker}
I0415 23:47:24.048735   46936 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-673373 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-673373 | a1fbdba65db39 | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-673373 | 0796eca8ad569 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.29.3           | 0e9b4a0d1e86d | 85.5MB |
| docker.io/library/nginx                     | alpine            | b8c82647e8a25 | 43.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-673373 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 4b51f9f6bc9b9 | 58.1MB |
| docker.io/library/nginx                     | latest            | 48b4217efe5ed | 192MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 2581114f5709d | 123MB  |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 121d70d9a3805 | 118MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-673373 image ls --format table --alsologtostderr:
I0415 23:47:27.927880   47415 out.go:291] Setting OutFile to fd 1 ...
I0415 23:47:27.928023   47415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:27.928054   47415 out.go:304] Setting ErrFile to fd 2...
I0415 23:47:27.928066   47415 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:27.928453   47415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
I0415 23:47:27.929534   47415 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:27.929729   47415 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:27.930484   47415 cli_runner.go:164] Run: docker container inspect functional-673373 --format={{.State.Status}}
I0415 23:47:27.948362   47415 ssh_runner.go:195] Run: systemctl --version
I0415 23:47:27.948423   47415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-673373
I0415 23:47:27.964291   47415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/functional-673373/id_rsa Username:docker}
I0415 23:47:28.063900   47415 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/04/15 23:47:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-673373 image ls --format json --alsologtostderr:
[{"id":"4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"58100000"},
{"id":"121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"118000000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"0796eca8ad5693ecde8f90719bd42b5dbe8127dcfc648befc5964550b8d4a32c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-673373"],"size":"30"},
{"id":"2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"123000000"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},
{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-673373"],"size":"32900000"},
{"id":"b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"48b4217efe5ed7e85a8946668b6adedb8242a5433da2c53273fb4c112f4c5d99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"a1fbdba65db39511857938b878503f4a3dc8c5f87bc7cec38e2b11fa7cc90298","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-673373"],"size":"1410000"},
{"id":"0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"85500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-673373 image ls --format json --alsologtostderr:
I0415 23:47:27.707856   47388 out.go:291] Setting OutFile to fd 1 ...
I0415 23:47:27.708075   47388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:27.708104   47388 out.go:304] Setting ErrFile to fd 2...
I0415 23:47:27.708123   47388 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:27.708426   47388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
I0415 23:47:27.709100   47388 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:27.709279   47388 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:27.709835   47388 cli_runner.go:164] Run: docker container inspect functional-673373 --format={{.State.Status}}
I0415 23:47:27.725960   47388 ssh_runner.go:195] Run: systemctl --version
I0415 23:47:27.726010   47388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-673373
I0415 23:47:27.741738   47388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/functional-673373/id_rsa Username:docker}
I0415 23:47:27.839703   47388 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
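Note: the JSON above is a flat array of {id, repoDigests, repoTags, size} objects, so it is easy to post-process on the host. A sketch (assumes jq is available):

	out/minikube-linux-arm64 -p functional-673373 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'   # prints one "tag <TAB> size-in-bytes" line per image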

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-673373 image ls --format yaml --alsologtostderr:
- id: 48b4217efe5ed7e85a8946668b6adedb8242a5433da2c53273fb4c112f4c5d99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "123000000"
- id: 4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "58100000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 0796eca8ad5693ecde8f90719bd42b5dbe8127dcfc648befc5964550b8d4a32c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-673373
size: "30"
- id: b8c82647e8a2586145e422943ae4c69c9b1600db636e1269efd256360eb396b0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "85500000"
- id: 121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "118000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-673373
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-673373 image ls --format yaml --alsologtostderr:
I0415 23:47:24.170673   46963 out.go:291] Setting OutFile to fd 1 ...
I0415 23:47:24.171007   46963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:24.171038   46963 out.go:304] Setting ErrFile to fd 2...
I0415 23:47:24.171059   46963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:24.171385   46963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
I0415 23:47:24.172034   46963 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:24.172210   46963 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:24.172716   46963 cli_runner.go:164] Run: docker container inspect functional-673373 --format={{.State.Status}}
I0415 23:47:24.189964   46963 ssh_runner.go:195] Run: systemctl --version
I0415 23:47:24.190018   46963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-673373
I0415 23:47:24.208659   46963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/functional-673373/id_rsa Username:docker}
I0415 23:47:24.307944   46963 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh pgrep buildkitd: exit status 1 (363.080993ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image build -t localhost/my-image:functional-673373 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image build -t localhost/my-image:functional-673373 testdata/build --alsologtostderr: (2.715603406s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-673373 image build -t localhost/my-image:functional-673373 testdata/build --alsologtostderr:
I0415 23:47:24.796149   47041 out.go:291] Setting OutFile to fd 1 ...
I0415 23:47:24.796489   47041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:24.796521   47041 out.go:304] Setting ErrFile to fd 2...
I0415 23:47:24.796550   47041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:47:24.796850   47041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
I0415 23:47:24.797856   47041 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:24.799731   47041 config.go:182] Loaded profile config "functional-673373": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0415 23:47:24.800813   47041 cli_runner.go:164] Run: docker container inspect functional-673373 --format={{.State.Status}}
I0415 23:47:24.823997   47041 ssh_runner.go:195] Run: systemctl --version
I0415 23:47:24.824045   47041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-673373
I0415 23:47:24.841515   47041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/functional-673373/id_rsa Username:docker}
I0415 23:47:24.944041   47041 build_images.go:161] Building image from path: /tmp/build.2854248981.tar
I0415 23:47:24.944114   47041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 23:47:24.954973   47041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2854248981.tar
I0415 23:47:24.958901   47041 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2854248981.tar: stat -c "%s %y" /var/lib/minikube/build/build.2854248981.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2854248981.tar': No such file or directory
I0415 23:47:24.958934   47041 ssh_runner.go:362] scp /tmp/build.2854248981.tar --> /var/lib/minikube/build/build.2854248981.tar (3072 bytes)
I0415 23:47:24.995454   47041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2854248981
I0415 23:47:25.008835   47041 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2854248981 -xf /var/lib/minikube/build/build.2854248981.tar
I0415 23:47:25.025650   47041 docker.go:360] Building image: /var/lib/minikube/build/build.2854248981
I0415 23:47:25.025760   47041 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-673373 /var/lib/minikube/build/build.2854248981
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:a1fbdba65db39511857938b878503f4a3dc8c5f87bc7cec38e2b11fa7cc90298 done
#8 naming to localhost/my-image:functional-673373 done
#8 DONE 0.1s
I0415 23:47:27.391313   47041 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-673373 /var/lib/minikube/build/build.2854248981: (2.365528295s)
I0415 23:47:27.391409   47041 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2854248981
I0415 23:47:27.401939   47041 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2854248981.tar
I0415 23:47:27.412283   47041 build_images.go:217] Built localhost/my-image:functional-673373 from /tmp/build.2854248981.tar
I0415 23:47:27.412380   47041 build_images.go:133] succeeded building to: functional-673373
I0415 23:47:27.412404   47041 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)
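Note: the BuildKit log above implies a three-instruction Dockerfile. A sketch that reproduces the same steps (the actual contents of testdata/build are not shown in this report, so the file names and contents below are assumptions):

	# Recreate the build context implied by steps #5-#7 above (names are assumptions)
	mkdir -p /tmp/build-demo && cd /tmp/build-demo
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	echo demo > content.txt
	out/minikube-linux-arm64 -p functional-673373 image build -t localhost/my-image:functional-673373 .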

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.885337691s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-673373
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/DockerEnv/bash (1.37s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-673373 docker-env) && out/minikube-linux-arm64 status -p functional-673373"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-673373 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.37s)
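Note: the docker-env eval above points the host docker CLI at the daemon inside the functional-673373 node for the current shell only. To revert the variables in the same shell, minikube provides an unset form (a sketch):

	eval $(out/minikube-linux-arm64 -p functional-673373 docker-env --unset)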

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr: (4.028848679s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-673373 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-673373 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-r4g6n" [fcc580a9-b2a7-4021-870a-6e4d43d0c14f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-r4g6n" [fcc580a9-b2a7-4021-870a-6e4d43d0c14f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.007369507s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)
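Note: the deploy-and-wait flow above is the fixture the remaining ServiceCmd tests reuse. Done by hand it is roughly (a sketch; the wait step stands in for the test's pod polling):

	kubectl --context functional-673373 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-673373 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-673373 wait --for=condition=ready pod -l app=hello-node --timeout=120s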

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr: (2.846681069s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.333380816s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-673373
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr
E0415 23:46:40.766228    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image load --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr: (3.264577659s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image save gcr.io/google-containers/addon-resizer:functional-673373 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image rm gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service list -o json
functional_test.go:1490: Took "515.807959ms" to run "out/minikube-linux-arm64 -p functional-673373 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.271737626s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)
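Note: together with ImageSaveToFile above, this completes a save/load round trip through a tarball. The equivalent manual flow (a sketch; the tarball path is illustrative):

	out/minikube-linux-arm64 -p functional-673373 image save gcr.io/google-containers/addon-resizer:functional-673373 /tmp/addon-resizer.tar
	out/minikube-linux-arm64 -p functional-673373 image load /tmp/addon-resizer.tar
	out/minikube-linux-arm64 -p functional-673373 image ls | grep addon-resizer   # confirm the image is back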

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32615
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32615
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
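Note: once the NodePort endpoint is resolved, the service is reachable directly from the host. For example (a sketch; the port is assigned per run, 32615 in this log):

	curl -s http://192.168.49.2:32615/ | head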

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-673373
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 image save --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-673373 image save --daemon gcr.io/google-containers/addon-resizer:functional-673373 --alsologtostderr: (1.278168624s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-673373
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 43250: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-673373 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f2aac6a2-9063-481d-80d8-557060c0cf50] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f2aac6a2-9063-481d-80d8-557060c0cf50] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00465043s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-673373 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.16.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
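Note: the serial tunnel tests above amount to the following manual flow; with minikube tunnel running, LoadBalancer services get a host-reachable ingress IP (10.100.16.251 in this run). A sketch:

	out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr &   # leave running in the background
	IP=$(kubectl --context functional-673373 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working"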

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-673373 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "450.697843ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "71.297469ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "425.039681ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "80.634501ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (7.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdany-port1528156337/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713224829833432309" to /tmp/TestFunctionalparallelMountCmdany-port1528156337/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713224829833432309" to /tmp/TestFunctionalparallelMountCmdany-port1528156337/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713224829833432309" to /tmp/TestFunctionalparallelMountCmdany-port1528156337/001/test-1713224829833432309
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.13943ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 23:47 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 23:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 23:47 test-1713224829833432309
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh cat /mount-9p/test-1713224829833432309
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-673373 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1a11f5a0-c9ff-489f-9e79-ad0d8c99f496] Pending
helpers_test.go:344: "busybox-mount" [1a11f5a0-c9ff-489f-9e79-ad0d8c99f496] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1a11f5a0-c9ff-489f-9e79-ad0d8c99f496] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1a11f5a0-c9ff-489f-9e79-ad0d8c99f496] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004593092s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-673373 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdany-port1528156337/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.53s)
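Note: the 9p mount flow above, reduced to its manual steps (a sketch; the host path is illustrative):

	out/minikube-linux-arm64 mount -p functional-673373 /tmp/demo:/mount-9p &   # the mount lives as long as this process
	out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p"   # verify inside the node
	out/minikube-linux-arm64 -p functional-673373 ssh "sudo umount -f /mount-9p"         # clean up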

TestFunctional/parallel/MountCmd/specific-port (2.25s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdspecific-port3310519766/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (499.109216ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdspecific-port3310519766/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh "sudo umount -f /mount-9p": exit status 1 (326.596653ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-673373 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdspecific-port3310519766/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.25s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T" /mount1: exit status 1 (603.558097ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-673373 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-673373 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-673373 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780952527/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.45s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-673373
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-673373
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-673373
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (137.23s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-438522 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0415 23:48:02.686654    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-438522 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m16.393421803s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (137.23s)

TestMultiControlPlane/serial/DeployApp (59.03s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-438522 -- rollout status deployment/busybox: (3.642336124s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0415 23:50:18.834674    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-nk6h4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xdfdt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xfc5f -- nslookup kubernetes.io
E0415 23:50:46.529062    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-nk6h4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xdfdt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xfc5f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-nk6h4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xdfdt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xfc5f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (59.03s)
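
The repeated jsonpath queries above are a poll: the busybox deployment has three replicas, and the test keeps re-reading {.items[*].status.podIP} until all three pods report an IP ("expected 3 Pod IPs but got 2" is one replica not yet scheduled). A minimal sketch of that loop, assuming kubectl is on PATH and using the context from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs returns the space-separated pod IPs for the given kubectl context.
func podIPs(context string) []string {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		if ips := podIPs("ha-438522"); len(ips) == 3 {
			fmt.Println("all pod IPs assigned:", ips)
			return
		}
		time.Sleep(2 * time.Second) // same spirit as the test's re-poll
	}
}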

TestMultiControlPlane/serial/PingHostFromPods (1.74s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-nk6h4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-nk6h4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xdfdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xdfdt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xfc5f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-438522 -- exec busybox-7fdf7869d9-xfc5f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.74s)
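
The pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 extracts the host's gateway IP from busybox nslookup output: awk takes the fifth line (the answer's Address line) and cut takes its third space-separated field, which is then handed to ping -c 1. The same extraction in Go; the sample output below is illustrative of busybox's format, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `... | awk 'NR==5' | cut -d' ' -f3` on nslookup output.
func hostIP(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // awk NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3 -> index 2
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}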

TestMultiControlPlane/serial/AddWorkerNode (25.89s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-438522 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-438522 -v=7 --alsologtostderr: (24.733827891s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr: (1.152518407s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.89s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-438522 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (20.41s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 status --output json -v=7 --alsologtostderr: (1.108946409s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp testdata/cp-test.txt ha-438522:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055564306/001/cp-test_ha-438522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522:/home/docker/cp-test.txt ha-438522-m02:/home/docker/cp-test_ha-438522_ha-438522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test_ha-438522_ha-438522-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522:/home/docker/cp-test.txt ha-438522-m03:/home/docker/cp-test_ha-438522_ha-438522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test_ha-438522_ha-438522-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522:/home/docker/cp-test.txt ha-438522-m04:/home/docker/cp-test_ha-438522_ha-438522-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test_ha-438522_ha-438522-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp testdata/cp-test.txt ha-438522-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055564306/001/cp-test_ha-438522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m02:/home/docker/cp-test.txt ha-438522:/home/docker/cp-test_ha-438522-m02_ha-438522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test_ha-438522-m02_ha-438522.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m02:/home/docker/cp-test.txt ha-438522-m03:/home/docker/cp-test_ha-438522-m02_ha-438522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test_ha-438522-m02_ha-438522-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m02:/home/docker/cp-test.txt ha-438522-m04:/home/docker/cp-test_ha-438522-m02_ha-438522-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test_ha-438522-m02_ha-438522-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp testdata/cp-test.txt ha-438522-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055564306/001/cp-test_ha-438522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m03:/home/docker/cp-test.txt ha-438522:/home/docker/cp-test_ha-438522-m03_ha-438522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test_ha-438522-m03_ha-438522.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m03:/home/docker/cp-test.txt ha-438522-m02:/home/docker/cp-test_ha-438522-m03_ha-438522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test_ha-438522-m03_ha-438522-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m03:/home/docker/cp-test.txt ha-438522-m04:/home/docker/cp-test_ha-438522-m03_ha-438522-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test_ha-438522-m03_ha-438522-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp testdata/cp-test.txt ha-438522-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3055564306/001/cp-test_ha-438522-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test.txt"
E0415 23:51:33.666751    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:33.672006    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:33.682982    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:33.703275    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:33.744132    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:33.824337    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m04:/home/docker/cp-test.txt ha-438522:/home/docker/cp-test_ha-438522-m04_ha-438522.txt
E0415 23:51:33.985443    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:34.305926    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522 "sudo cat /home/docker/cp-test_ha-438522-m04_ha-438522.txt"
E0415 23:51:34.946933    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m04:/home/docker/cp-test.txt ha-438522-m02:/home/docker/cp-test_ha-438522-m04_ha-438522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m02 "sudo cat /home/docker/cp-test_ha-438522-m04_ha-438522-m02.txt"
E0415 23:51:36.227889    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 cp ha-438522-m04:/home/docker/cp-test.txt ha-438522-m03:/home/docker/cp-test_ha-438522-m04_ha-438522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 ssh -n ha-438522-m03 "sudo cat /home/docker/cp-test_ha-438522-m04_ha-438522-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.41s)
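
The CopyFile block is a full matrix: testdata/cp-test.txt is copied from the host to every node, then from every node to every other node, and each hop is verified by cat-ing the file back over ssh. A condensed Go sketch of the host-to-node leg, using the profile and paths from the log (the run helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

const profile = "ha-438522"

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	nodes := []string{"ha-438522", "ha-438522-m02", "ha-438522-m03", "ha-438522-m04"}
	for _, n := range nodes {
		// minikube cp <local> <node>:<remote> stages the file on one node...
		if _, err := run("-p", profile, "cp", "testdata/cp-test.txt",
			n+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println("cp failed on", n, ":", err)
			continue
		}
		// ...and ssh -n <node> reads it back to verify the transfer.
		out, err := run("-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: err=%v content=%q\n", n, err, out)
	}
}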

TestMultiControlPlane/serial/StopSecondaryNode (11.72s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 node stop m02 -v=7 --alsologtostderr
E0415 23:51:38.789436    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:51:43.909665    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 node stop m02 -v=7 --alsologtostderr: (10.94872038s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr: exit status 7 (772.150531ms)

-- stdout --
	ha-438522
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438522-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438522-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438522-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0415 23:51:48.488551   68360 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:51:48.488668   68360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:51:48.488678   68360 out.go:304] Setting ErrFile to fd 2...
	I0415 23:51:48.488683   68360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:51:48.488933   68360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:51:48.489130   68360 out.go:298] Setting JSON to false
	I0415 23:51:48.489164   68360 mustload.go:65] Loading cluster: ha-438522
	I0415 23:51:48.489202   68360 notify.go:220] Checking for updates...
	I0415 23:51:48.489605   68360 config.go:182] Loaded profile config "ha-438522": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:51:48.489620   68360 status.go:255] checking status of ha-438522 ...
	I0415 23:51:48.490151   68360 cli_runner.go:164] Run: docker container inspect ha-438522 --format={{.State.Status}}
	I0415 23:51:48.507744   68360 status.go:330] ha-438522 host status = "Running" (err=<nil>)
	I0415 23:51:48.507768   68360 host.go:66] Checking if "ha-438522" exists ...
	I0415 23:51:48.508066   68360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-438522
	I0415 23:51:48.527419   68360 host.go:66] Checking if "ha-438522" exists ...
	I0415 23:51:48.527763   68360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 23:51:48.527816   68360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-438522
	I0415 23:51:48.557153   68360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/ha-438522/id_rsa Username:docker}
	I0415 23:51:48.658033   68360 ssh_runner.go:195] Run: systemctl --version
	I0415 23:51:48.662513   68360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:51:48.675739   68360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0415 23:51:48.731245   68360 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-15 23:51:48.721529152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0415 23:51:48.731837   68360 kubeconfig.go:125] found "ha-438522" server: "https://192.168.49.254:8443"
	I0415 23:51:48.731870   68360 api_server.go:166] Checking apiserver status ...
	I0415 23:51:48.731915   68360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:51:48.744334   68360 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2192/cgroup
	I0415 23:51:48.754063   68360 api_server.go:182] apiserver freezer: "13:freezer:/docker/26189f61cce8a350124f06251a76ce01146887b0f2ee13ea71dcee932d097ff3/kubepods/burstable/podfcebe86fd1ec66081e9bd3d45254dca9/5aef4cd5ad47c12af6af0ff110114f7fc47c34a63e7898157e05864780cc84b4"
	I0415 23:51:48.754143   68360 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/26189f61cce8a350124f06251a76ce01146887b0f2ee13ea71dcee932d097ff3/kubepods/burstable/podfcebe86fd1ec66081e9bd3d45254dca9/5aef4cd5ad47c12af6af0ff110114f7fc47c34a63e7898157e05864780cc84b4/freezer.state
	I0415 23:51:48.763072   68360 api_server.go:204] freezer state: "THAWED"
	I0415 23:51:48.763107   68360 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0415 23:51:48.771131   68360 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0415 23:51:48.771162   68360 status.go:422] ha-438522 apiserver status = Running (err=<nil>)
	I0415 23:51:48.771174   68360 status.go:257] ha-438522 status: &{Name:ha-438522 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 23:51:48.771192   68360 status.go:255] checking status of ha-438522-m02 ...
	I0415 23:51:48.771626   68360 cli_runner.go:164] Run: docker container inspect ha-438522-m02 --format={{.State.Status}}
	I0415 23:51:48.798414   68360 status.go:330] ha-438522-m02 host status = "Stopped" (err=<nil>)
	I0415 23:51:48.798446   68360 status.go:343] host is not running, skipping remaining checks
	I0415 23:51:48.798454   68360 status.go:257] ha-438522-m02 status: &{Name:ha-438522-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 23:51:48.798473   68360 status.go:255] checking status of ha-438522-m03 ...
	I0415 23:51:48.798776   68360 cli_runner.go:164] Run: docker container inspect ha-438522-m03 --format={{.State.Status}}
	I0415 23:51:48.814501   68360 status.go:330] ha-438522-m03 host status = "Running" (err=<nil>)
	I0415 23:51:48.814528   68360 host.go:66] Checking if "ha-438522-m03" exists ...
	I0415 23:51:48.814851   68360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-438522-m03
	I0415 23:51:48.835673   68360 host.go:66] Checking if "ha-438522-m03" exists ...
	I0415 23:51:48.835978   68360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 23:51:48.837312   68360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-438522-m03
	I0415 23:51:48.858649   68360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/ha-438522-m03/id_rsa Username:docker}
	I0415 23:51:48.964681   68360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:51:48.982351   68360 kubeconfig.go:125] found "ha-438522" server: "https://192.168.49.254:8443"
	I0415 23:51:48.982384   68360 api_server.go:166] Checking apiserver status ...
	I0415 23:51:48.982431   68360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:51:48.995225   68360 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2028/cgroup
	I0415 23:51:49.005086   68360 api_server.go:182] apiserver freezer: "13:freezer:/docker/a0957ebac13f95c26b4f1bfb7b94be08dc1b2c0553b7c802d61c9da2a5ee9495/kubepods/burstable/pod0a9637130091858dbcea9836bfe6c65a/7249e62a4f9e7d6454a79eb3140d228ecab520d1f0531e798d2c32913efdf59c"
	I0415 23:51:49.005219   68360 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a0957ebac13f95c26b4f1bfb7b94be08dc1b2c0553b7c802d61c9da2a5ee9495/kubepods/burstable/pod0a9637130091858dbcea9836bfe6c65a/7249e62a4f9e7d6454a79eb3140d228ecab520d1f0531e798d2c32913efdf59c/freezer.state
	I0415 23:51:49.016451   68360 api_server.go:204] freezer state: "THAWED"
	I0415 23:51:49.016483   68360 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0415 23:51:49.024336   68360 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0415 23:51:49.024363   68360 status.go:422] ha-438522-m03 apiserver status = Running (err=<nil>)
	I0415 23:51:49.024374   68360 status.go:257] ha-438522-m03 status: &{Name:ha-438522-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 23:51:49.024391   68360 status.go:255] checking status of ha-438522-m04 ...
	I0415 23:51:49.024715   68360 cli_runner.go:164] Run: docker container inspect ha-438522-m04 --format={{.State.Status}}
	I0415 23:51:49.041936   68360 status.go:330] ha-438522-m04 host status = "Running" (err=<nil>)
	I0415 23:51:49.041957   68360 host.go:66] Checking if "ha-438522-m04" exists ...
	I0415 23:51:49.042257   68360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-438522-m04
	I0415 23:51:49.060477   68360 host.go:66] Checking if "ha-438522-m04" exists ...
	I0415 23:51:49.060773   68360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0415 23:51:49.060823   68360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-438522-m04
	I0415 23:51:49.077755   68360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/ha-438522-m04/id_rsa Username:docker}
	I0415 23:51:49.176298   68360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:51:49.188412   68360 status.go:257] ha-438522-m04 status: &{Name:ha-438522-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.72s)
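
The stderr trace above shows how status classifies an apiserver as Running: pgrep finds the kube-apiserver PID, /proc/<pid>/cgroup gives its freezer cgroup, freezer.state must read THAWED, and finally /healthz on the HA virtual IP (192.168.49.254:8443) must return 200. A Go sketch of that final health probe only; the InsecureSkipVerify setting is an assumption so the sketch runs standalone, whereas minikube's real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy issues GET <endpoint>/healthz and reports whether it
// answered 200, mirroring the "returned 200: ok" lines in the trace.
func apiserverHealthy(endpoint string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for a standalone sketch; do not do this in production.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.49.254:8443"))
}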

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.59s)

TestMultiControlPlane/serial/RestartSecondaryNode (63.83s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 node start m02 -v=7 --alsologtostderr
E0415 23:51:54.150643    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:52:14.631286    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 node start m02 -v=7 --alsologtostderr: (1m2.714623973s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr: (1.011668102s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (63.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.49s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-438522 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-438522 -v=7 --alsologtostderr
E0415 23:52:55.591595    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-438522 -v=7 --alsologtostderr: (34.195121289s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-438522 --wait=true -v=7 --alsologtostderr
E0415 23:54:17.512119    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:55:18.834143    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0415 23:56:33.667263    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0415 23:57:01.353211    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-438522 --wait=true -v=7 --alsologtostderr: (3m33.101783465s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-438522
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.49s)
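
RestartClusterKeepsNodes captures the node list, stops and restarts the whole cluster, and then asserts the list is unchanged. A short sketch of that comparison; the normalization is a guess at the check's spirit, not the test's exact code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeList returns the trimmed output of "minikube node list" for a profile.
func nodeList(profile string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "node", "list", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	before := nodeList("ha-438522")
	// ... stop and restart the cluster here, as the test does ...
	after := nodeList("ha-438522")
	fmt.Println("node list preserved across restart:", before == after)
}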

TestMultiControlPlane/serial/DeleteSecondaryNode (12.52s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 node delete m03 -v=7 --alsologtostderr: (11.553694196s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

TestMultiControlPlane/serial/StopCluster (32.92s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 stop -v=7 --alsologtostderr: (32.814709812s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr: exit status 7 (105.136792ms)

-- stdout --
	ha-438522
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438522-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438522-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0415 23:57:47.832490   94663 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:57:47.832628   94663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:57:47.832639   94663 out.go:304] Setting ErrFile to fd 2...
	I0415 23:57:47.832644   94663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:57:47.832873   94663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0415 23:57:47.833046   94663 out.go:298] Setting JSON to false
	I0415 23:57:47.833080   94663 mustload.go:65] Loading cluster: ha-438522
	I0415 23:57:47.833180   94663 notify.go:220] Checking for updates...
	I0415 23:57:47.833489   94663 config.go:182] Loaded profile config "ha-438522": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0415 23:57:47.833508   94663 status.go:255] checking status of ha-438522 ...
	I0415 23:57:47.833981   94663 cli_runner.go:164] Run: docker container inspect ha-438522 --format={{.State.Status}}
	I0415 23:57:47.847522   94663 status.go:330] ha-438522 host status = "Stopped" (err=<nil>)
	I0415 23:57:47.847545   94663 status.go:343] host is not running, skipping remaining checks
	I0415 23:57:47.847552   94663 status.go:257] ha-438522 status: &{Name:ha-438522 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 23:57:47.847574   94663 status.go:255] checking status of ha-438522-m02 ...
	I0415 23:57:47.847877   94663 cli_runner.go:164] Run: docker container inspect ha-438522-m02 --format={{.State.Status}}
	I0415 23:57:47.863262   94663 status.go:330] ha-438522-m02 host status = "Stopped" (err=<nil>)
	I0415 23:57:47.863288   94663 status.go:343] host is not running, skipping remaining checks
	I0415 23:57:47.863295   94663 status.go:257] ha-438522-m02 status: &{Name:ha-438522-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0415 23:57:47.863316   94663 status.go:255] checking status of ha-438522-m04 ...
	I0415 23:57:47.863650   94663 cli_runner.go:164] Run: docker container inspect ha-438522-m04 --format={{.State.Status}}
	I0415 23:57:47.877105   94663 status.go:330] ha-438522-m04 host status = "Stopped" (err=<nil>)
	I0415 23:57:47.877127   94663 status.go:343] host is not running, skipping remaining checks
	I0415 23:57:47.877135   94663 status.go:257] ha-438522-m04 status: &{Name:ha-438522-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.92s)

TestMultiControlPlane/serial/RestartCluster (152.17s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-438522 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0416 00:00:18.834160    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-438522 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m31.070556183s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (152.17s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

TestMultiControlPlane/serial/AddSecondaryNode (47.76s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-438522 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-438522 --control-plane -v=7 --alsologtostderr: (46.600776604s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-438522 status -v=7 --alsologtostderr: (1.154993124s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.76s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestImageBuild/serial/Setup (34.82s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-304306 --driver=docker  --container-runtime=docker
E0416 00:01:33.667047    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0416 00:01:41.889989    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-304306 --driver=docker  --container-runtime=docker: (34.819987163s)
--- PASS: TestImageBuild/serial/Setup (34.82s)

TestImageBuild/serial/NormalBuild (2.04s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-304306
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-304306: (2.041801159s)
--- PASS: TestImageBuild/serial/NormalBuild (2.04s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-304306
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-304306: (1.025495245s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.8s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-304306
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.8s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-304306
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.80s)
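
The build subtests above exercise several invocation shapes: a plain build, a build passing --build-opt flags through to the engine (build-arg and no-cache), and a build whose -f selects a Dockerfile below the context root. The same invocations driven from Go, with paths and flags copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// build shells out to "minikube image build" with the given extra arguments.
func build(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"image", "build"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s\n", err, out)
}

func main() {
	build("-t", "aaa:latest", "./testdata/image-build/test-normal", "-p", "image-304306")
	build("-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str",
		"--build-opt=no-cache", "./testdata/image-build/test-arg", "-p", "image-304306")
	build("-t", "aaa:latest", "-f", "inner/Dockerfile",
		"./testdata/image-build/test-f", "-p", "image-304306")
}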

TestJSONOutput/start/Command (75.45s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-078613 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-078613 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m15.441956157s)
--- PASS: TestJSONOutput/start/Command (75.45s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
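
The Distinct/IncreasingCurrentSteps subtests validate the --output=json event stream: every io.k8s.sigs.minikube.step event (the TestErrorJSONOutput block below shows the wire format) carries a data.currentstep, and the sequence must neither repeat nor go backwards. A sketch that folds both assertions into one strictly-increasing check; the struct models only the fields used here, and the real subtests assert the two properties separately:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// cloudEvent models just the type and data fields of minikube's JSON events.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// stepsStrictlyIncreasing scans one event per line and checks currentstep.
func stepsStrictlyIncreasing(lines []string) bool {
	prev := -1
	for _, l := range lines {
		var ev cloudEvent
		if json.Unmarshal([]byte(l), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // non-step events (info, error) are ignored
		}
		n, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil || n <= prev {
			return false
		}
		prev = n
	}
	return true
}

func main() {
	fmt.Println(stepsStrictlyIncreasing([]string{
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0"}}`,
		`{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1"}}`,
	}))
}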

TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-078613 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-078613 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-078613 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-078613 --output=json --user=testUser: (5.90129885s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-652976 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-652976 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.91619ms)
-- stdout --
	{"specversion":"1.0","id":"ed8608d1-cd84-49e9-904d-3f97b85ed1de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-652976] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0f62d4a-db1d-4fb9-a3dd-4ab8aec93f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18647"}}
	{"specversion":"1.0","id":"3c4278f1-b97d-4323-b8c0-39b337b89b9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05743edc-3847-4b8c-b5dd-c59422c479a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig"}}
	{"specversion":"1.0","id":"2359cdde-8154-4e4c-89f9-b8bfee614cc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube"}}
	{"specversion":"1.0","id":"f1ae52f3-1533-4db2-a835-9c929f20f972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cf80a17f-d02e-4198-b80b-2b27912eab15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"be0505c7-8ac0-45f5-ad12-a050420f430a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-652976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-652976
--- PASS: TestErrorJSONOutput (0.22s)
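Each stdout line above is a single CloudEvents JSON object, and the final io.k8s.sigs.minikube.error event is what carries the exit code (56) and error name (DRV_UNSUPPORTED_OS) for the deliberately unsupported --driver=fail. A rough sketch of pulling those fields out of the stream (the struct shape mirrors the JSON above; this is an illustration, not the test's own code):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // errorEvent mirrors the io.k8s.sigs.minikube.error payload shown above.
    type errorEvent struct {
        Type string `json:"type"`
        Data struct {
            Name     string `json:"name"`
            Message  string `json:"message"`
            ExitCode string `json:"exitcode"`
        } `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev errorEvent
            if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.error" {
                continue
            }
            // e.g. DRV_UNSUPPORTED_OS (56): The driver 'fail' is not supported on linux/arm64
            fmt.Printf("%s (%s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
        }
    }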

TestKicCustomNetwork/create_custom_network (37.44s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-391955 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-391955 --network=: (35.223348187s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-391955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-391955
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-391955: (2.192348822s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.44s)

TestKicCustomNetwork/use_default_bridge_network (31.37s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-910519 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-910519 --network=bridge: (29.315675268s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-910519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-910519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-910519: (2.032406685s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.37s)

TestKicExistingNetwork (32.7s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-507507 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-507507 --network=existing-network: (30.485419245s)
helpers_test.go:175: Cleaning up "existing-network-507507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-507507
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-507507: (2.0757808s)
--- PASS: TestKicExistingNetwork (32.70s)

TestKicCustomSubnet (36.02s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-085586 --subnet=192.168.60.0/24
E0416 00:05:18.835311    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-085586 --subnet=192.168.60.0/24: (33.946118265s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-085586 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-085586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-085586
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-085586: (2.055546763s)
--- PASS: TestKicCustomSubnet (36.02s)
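The subnet assertion above works by reading back what Docker actually allocated, via the network-inspect Go template in the log. The same check as a stand-alone sketch (the docker command and template are exactly the ones in the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const want = "192.168.60.0/24" // the --subnet passed to minikube start
        out, err := exec.Command("docker", "network", "inspect", "custom-subnet-085586",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        got := strings.TrimSpace(string(out))
        if got != want {
            fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
            return
        }
        fmt.Println("docker allocated the requested subnet:", got)
    }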

TestKicStaticIP (36.47s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-524029 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-524029 --static-ip=192.168.200.200: (34.117555448s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-524029 ip
helpers_test.go:175: Cleaning up "static-ip-524029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-524029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-524029: (2.171120876s)
--- PASS: TestKicStaticIP (36.47s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (75.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-110707 --driver=docker  --container-runtime=docker
E0416 00:06:33.667034    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-110707 --driver=docker  --container-runtime=docker: (32.522153684s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-113645 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-113645 --driver=docker  --container-runtime=docker: (37.598409492s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-110707
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-113645
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-113645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-113645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-113645: (2.144584184s)
helpers_test.go:175: Cleaning up "first-110707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-110707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-110707: (2.197934614s)
--- PASS: TestMinikubeProfile (75.82s)

TestMountStart/serial/StartWithMountFirst (8.4s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-654045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-654045 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.396312658s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.40s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-654045 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (10.63s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-666631 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-666631 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.632916084s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.63s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-666631 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.49s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-654045 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-654045 --alsologtostderr -v=5: (1.488315873s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-666631 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-666631
E0416 00:07:56.714252    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-666631: (1.216690378s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.6s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-666631
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-666631: (7.603705564s)
--- PASS: TestMountStart/serial/RestartStopped (8.60s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-666631 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (76.98s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-952975 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-952975 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.380067868s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.98s)

TestMultiNode/serial/DeployApp2Nodes (38.1s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-952975 -- rollout status deployment/busybox: (3.850877618s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j26cp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j7mjm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j26cp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j7mjm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j26cp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j7mjm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (38.10s)
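The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a retry loop rather than failures: the test re-reads .status.podIP until both busybox replicas have an address. A simplified version of that polling pattern (the jsonpath query is the one in the log; plain kubectl --context stands in for the out/minikube-linux-arm64 kubectl wrapper, and the timeout is arbitrary):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "multinode-952975",
                "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
            if err == nil {
                ips := strings.Fields(string(out))
                if len(ips) == 2 && ips[0] != ips[1] {
                    fmt.Println("both pods have IPs:", ips)
                    return
                }
            }
            time.Sleep(5 * time.Second) // IP assignment can lag pod scheduling
        }
        fmt.Println("timed out waiting for 2 distinct pod IPs")
    }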

TestMultiNode/serial/PingHostFrom2Pods (1.15s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j26cp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j26cp -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j7mjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-952975 -- exec busybox-7fdf7869d9-j7mjm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.15s)

TestMultiNode/serial/AddNode (18.69s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-952975 -v 3 --alsologtostderr
E0416 00:10:18.833590    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-952975 -v 3 --alsologtostderr: (17.923061056s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.69s)

TestMultiNode/serial/MultiNodeLabels (0.12s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-952975 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.39s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (10.74s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp testdata/cp-test.txt multinode-952975:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2599619176/001/cp-test_multinode-952975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975:/home/docker/cp-test.txt multinode-952975-m02:/home/docker/cp-test_multinode-952975_multinode-952975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test_multinode-952975_multinode-952975-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975:/home/docker/cp-test.txt multinode-952975-m03:/home/docker/cp-test_multinode-952975_multinode-952975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test_multinode-952975_multinode-952975-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp testdata/cp-test.txt multinode-952975-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2599619176/001/cp-test_multinode-952975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m02:/home/docker/cp-test.txt multinode-952975:/home/docker/cp-test_multinode-952975-m02_multinode-952975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test_multinode-952975-m02_multinode-952975.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m02:/home/docker/cp-test.txt multinode-952975-m03:/home/docker/cp-test_multinode-952975-m02_multinode-952975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test_multinode-952975-m02_multinode-952975-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp testdata/cp-test.txt multinode-952975-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2599619176/001/cp-test_multinode-952975-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m03:/home/docker/cp-test.txt multinode-952975:/home/docker/cp-test_multinode-952975-m03_multinode-952975.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975 "sudo cat /home/docker/cp-test_multinode-952975-m03_multinode-952975.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 cp multinode-952975-m03:/home/docker/cp-test.txt multinode-952975-m02:/home/docker/cp-test_multinode-952975-m03_multinode-952975-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 ssh -n multinode-952975-m02 "sudo cat /home/docker/cp-test_multinode-952975-m03_multinode-952975-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.74s)

TestMultiNode/serial/StopNode (2.37s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-952975 node stop m03: (1.255074538s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-952975 status: exit status 7 (560.09253ms)
-- stdout --
	multinode-952975
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-952975-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-952975-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr: exit status 7 (550.550634ms)
-- stdout --
	multinode-952975
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-952975-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-952975-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0416 00:10:36.420408  165768 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:10:36.420529  165768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:10:36.420538  165768 out.go:304] Setting ErrFile to fd 2...
	I0416 00:10:36.420543  165768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:10:36.420791  165768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0416 00:10:36.421029  165768 out.go:298] Setting JSON to false
	I0416 00:10:36.421065  165768 mustload.go:65] Loading cluster: multinode-952975
	I0416 00:10:36.421184  165768 notify.go:220] Checking for updates...
	I0416 00:10:36.421508  165768 config.go:182] Loaded profile config "multinode-952975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 00:10:36.421519  165768 status.go:255] checking status of multinode-952975 ...
	I0416 00:10:36.422277  165768 cli_runner.go:164] Run: docker container inspect multinode-952975 --format={{.State.Status}}
	I0416 00:10:36.438285  165768 status.go:330] multinode-952975 host status = "Running" (err=<nil>)
	I0416 00:10:36.438310  165768 host.go:66] Checking if "multinode-952975" exists ...
	I0416 00:10:36.438684  165768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-952975
	I0416 00:10:36.455073  165768 host.go:66] Checking if "multinode-952975" exists ...
	I0416 00:10:36.455499  165768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:10:36.455555  165768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-952975
	I0416 00:10:36.480709  165768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/multinode-952975/id_rsa Username:docker}
	I0416 00:10:36.580868  165768 ssh_runner.go:195] Run: systemctl --version
	I0416 00:10:36.585266  165768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:10:36.599319  165768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0416 00:10:36.660104  165768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-16 00:10:36.65050869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1057-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0416 00:10:36.660779  165768 kubeconfig.go:125] found "multinode-952975" server: "https://192.168.58.2:8443"
	I0416 00:10:36.660815  165768 api_server.go:166] Checking apiserver status ...
	I0416 00:10:36.660862  165768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:10:36.672357  165768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2064/cgroup
	I0416 00:10:36.681976  165768 api_server.go:182] apiserver freezer: "13:freezer:/docker/9640c0a848e4c75d70e7693ebff038f2f0cff8b1316da45dacc16117331792b0/kubepods/burstable/pod32d33f8568a874745f603baaa420772c/66daf29e489b51e9711492c7be0adff763b64bb63621a3c15c330e20123138ff"
	I0416 00:10:36.682054  165768 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9640c0a848e4c75d70e7693ebff038f2f0cff8b1316da45dacc16117331792b0/kubepods/burstable/pod32d33f8568a874745f603baaa420772c/66daf29e489b51e9711492c7be0adff763b64bb63621a3c15c330e20123138ff/freezer.state
	I0416 00:10:36.691147  165768 api_server.go:204] freezer state: "THAWED"
	I0416 00:10:36.691175  165768 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0416 00:10:36.698892  165768 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0416 00:10:36.698923  165768 status.go:422] multinode-952975 apiserver status = Running (err=<nil>)
	I0416 00:10:36.698933  165768 status.go:257] multinode-952975 status: &{Name:multinode-952975 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:10:36.698977  165768 status.go:255] checking status of multinode-952975-m02 ...
	I0416 00:10:36.699323  165768 cli_runner.go:164] Run: docker container inspect multinode-952975-m02 --format={{.State.Status}}
	I0416 00:10:36.719155  165768 status.go:330] multinode-952975-m02 host status = "Running" (err=<nil>)
	I0416 00:10:36.719181  165768 host.go:66] Checking if "multinode-952975-m02" exists ...
	I0416 00:10:36.719507  165768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-952975-m02
	I0416 00:10:36.737817  165768 host.go:66] Checking if "multinode-952975-m02" exists ...
	I0416 00:10:36.738133  165768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:10:36.738184  165768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-952975-m02
	I0416 00:10:36.757124  165768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/18647-2210/.minikube/machines/multinode-952975-m02/id_rsa Username:docker}
	I0416 00:10:36.861016  165768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:10:36.874333  165768 status.go:257] multinode-952975-m02 status: &{Name:multinode-952975-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:10:36.874369  165768 status.go:255] checking status of multinode-952975-m03 ...
	I0416 00:10:36.874728  165768 cli_runner.go:164] Run: docker container inspect multinode-952975-m03 --format={{.State.Status}}
	I0416 00:10:36.901746  165768 status.go:330] multinode-952975-m03 host status = "Stopped" (err=<nil>)
	I0416 00:10:36.901770  165768 status.go:343] host is not running, skipping remaining checks
	I0416 00:10:36.901777  165768 status.go:257] multinode-952975-m03 status: &{Name:multinode-952975-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
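Note that exit status 7 from minikube status is the expected outcome here, not a failure: status exits non-zero when any queried node is down, which is exactly what the test wants after stopping m03. A sketch of distinguishing that case from a real error in Go (the exit-code meaning is inferred from the log above, not from a documented contract):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-952975", "status")
        out, err := cmd.Output()
        fmt.Print(string(out)) // the per-node status table still prints on a non-zero exit

        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            fmt.Println("at least one node reports Stopped (exit status 7, expected)")
        } else if err != nil {
            fmt.Println("status failed:", err)
        }
    }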

TestMultiNode/serial/StartAfterStop (11.61s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-952975 node start m03 -v=7 --alsologtostderr: (10.826941253s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.61s)

TestMultiNode/serial/RestartKeepsNodes (91.27s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-952975
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-952975
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-952975: (23.01325052s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-952975 --wait=true -v=8 --alsologtostderr
E0416 00:11:33.666890    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-952975 --wait=true -v=8 --alsologtostderr: (1m8.102127585s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-952975
--- PASS: TestMultiNode/serial/RestartKeepsNodes (91.27s)

TestMultiNode/serial/DeleteNode (5.54s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-952975 node delete m03: (4.818595501s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

TestMultiNode/serial/StopMultiNode (21.51s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-952975 stop: (21.305972195s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-952975 status: exit status 7 (105.389119ms)
-- stdout --
	multinode-952975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-952975-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr: exit status 7 (95.862594ms)
-- stdout --
	multinode-952975
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-952975-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0416 00:12:46.805288  177908 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:12:46.805454  177908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:12:46.805474  177908 out.go:304] Setting ErrFile to fd 2...
	I0416 00:12:46.805494  177908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:12:46.805768  177908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-2210/.minikube/bin
	I0416 00:12:46.806001  177908 out.go:298] Setting JSON to false
	I0416 00:12:46.806067  177908 mustload.go:65] Loading cluster: multinode-952975
	I0416 00:12:46.806178  177908 notify.go:220] Checking for updates...
	I0416 00:12:46.806504  177908 config.go:182] Loaded profile config "multinode-952975": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0416 00:12:46.806533  177908 status.go:255] checking status of multinode-952975 ...
	I0416 00:12:46.807354  177908 cli_runner.go:164] Run: docker container inspect multinode-952975 --format={{.State.Status}}
	I0416 00:12:46.822502  177908 status.go:330] multinode-952975 host status = "Stopped" (err=<nil>)
	I0416 00:12:46.822522  177908 status.go:343] host is not running, skipping remaining checks
	I0416 00:12:46.822529  177908 status.go:257] multinode-952975 status: &{Name:multinode-952975 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:12:46.822552  177908 status.go:255] checking status of multinode-952975-m02 ...
	I0416 00:12:46.822863  177908 cli_runner.go:164] Run: docker container inspect multinode-952975-m02 --format={{.State.Status}}
	I0416 00:12:46.839912  177908 status.go:330] multinode-952975-m02 host status = "Stopped" (err=<nil>)
	I0416 00:12:46.839937  177908 status.go:343] host is not running, skipping remaining checks
	I0416 00:12:46.839945  177908 status.go:257] multinode-952975-m02 status: &{Name:multinode-952975-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.51s)

TestMultiNode/serial/RestartMultiNode (61.85s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-952975 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-952975 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m1.170483338s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-952975 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.85s)

TestMultiNode/serial/ValidateNameConflict (34.88s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-952975
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-952975-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-952975-m02 --driver=docker  --container-runtime=docker: exit status 14 (93.038099ms)
-- stdout --
	* [multinode-952975-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-952975-m02' is duplicated with machine name 'multinode-952975-m02' in profile 'multinode-952975'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-952975-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-952975-m03 --driver=docker  --container-runtime=docker: (32.243172176s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-952975
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-952975: exit status 80 (331.390891ms)
-- stdout --
	* Adding node m03 to cluster multinode-952975 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-952975-m03 already exists in multinode-952975-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-952975-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-952975-m03: (2.150730862s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.88s)

TestPreload (112.01s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-909744 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0416 00:15:18.834578    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-909744 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m12.644291579s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-909744 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-909744 image pull gcr.io/k8s-minikube/busybox: (1.611270367s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-909744
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-909744: (10.960363961s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-909744 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-909744 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (24.129278299s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-909744 image list
helpers_test.go:175: Cleaning up "test-preload-909744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-909744
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-909744: (2.299547897s)
--- PASS: TestPreload (112.01s)

TestScheduledStopUnix (103.55s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-555551 --memory=2048 --driver=docker  --container-runtime=docker
E0416 00:16:33.666979    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-555551 --memory=2048 --driver=docker  --container-runtime=docker: (30.216748574s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-555551 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-555551 -n scheduled-stop-555551
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-555551 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-555551 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-555551 -n scheduled-stop-555551
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-555551
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-555551 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-555551
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-555551: exit status 7 (78.233183ms)
-- stdout --
	scheduled-stop-555551
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-555551 -n scheduled-stop-555551
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-555551 -n scheduled-stop-555551: exit status 7 (80.560406ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-555551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-555551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-555551: (1.679543851s)
--- PASS: TestScheduledStopUnix (103.55s)
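The scheduled-stop sequence above is plain CLI driving: `minikube stop --schedule <duration>` returns immediately, and the stopped host later surfaces as exit code 7 from `minikube status` (flagged "may be ok" by the helper). A minimal Go sketch of that poll loop, assuming the binary path used in this run; the profile name and timeout are illustrative:
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForStopped polls `minikube status` until it exits with code 7,
	// which is how the test above detects that the scheduled stop fired.
	func waitForStopped(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("out/minikube-linux-arm64", "status", "-p", profile).Run()
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				return nil // host is Stopped
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s: not stopped within %s", profile, timeout)
	}

	func main() {
		profile := "scheduled-stop-555551" // illustrative profile name
		// Schedule the stop 15s out, as the test does with --schedule 15s.
		if err := exec.Command("out/minikube-linux-arm64", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
			panic(err)
		}
		if err := waitForStopped(profile, time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("scheduled stop completed")
	}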

TestSkaffold (120.4s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2595922110 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-421454 --memory=2600 --driver=docker  --container-runtime=docker
E0416 00:18:21.890205    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-421454 --memory=2600 --driver=docker  --container-runtime=docker: (32.980219273s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2595922110 run --minikube-profile skaffold-421454 --kube-context skaffold-421454 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2595922110 run --minikube-profile skaffold-421454 --kube-context skaffold-421454 --status-check=true --port-forward=false --interactive=false: (1m10.511405247s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f8477c8d6-4q6fw" [c1812d08-33b4-498a-955d-1a313459379e] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003583093s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-585ff974bd-cvgg2" [1739fe02-dd00-4623-b0d0-70d164c19b8d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.016640426s
helpers_test.go:175: Cleaning up "skaffold-421454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-421454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-421454: (3.107167621s)
--- PASS: TestSkaffold (120.40s)

TestInsufficientStorage (14.13s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-741712 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-741712 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.881963506s)

-- stdout --
	{"specversion":"1.0","id":"257f0e83-d641-4fbc-af4e-5344009530b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-741712] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"380b939c-0def-402e-a757-7a84df24dc00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18647"}}
	{"specversion":"1.0","id":"3270ef02-8d64-40ed-b246-70d8182695f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5f23362-3d43-4069-8192-db65ea591fbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig"}}
	{"specversion":"1.0","id":"66e1f6d7-6188-4d7a-98a3-f112e312a124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube"}}
	{"specversion":"1.0","id":"49694af8-fb14-4ca5-9c57-f7876af038bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6be792f9-9edd-4d02-8e1d-e015f1c81a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9310d739-dbad-4e77-acbe-5b2a437a3287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"115f6810-a5ff-4b83-a0cf-6ed42a65467a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"29b54b17-4cda-4b69-b2e3-8e5fa65d2ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"50120372-fc66-453f-8da3-b477447ec867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c3253714-1848-4622-9d9f-6d2659c92a64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-741712\" primary control-plane node in \"insufficient-storage-741712\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5a4cfbb-478e-4965-811d-96508227d229","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713215244-18647 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4975ac03-1ca6-4f6b-b353-ce38bf5d72a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c80b382-0e9d-44f1-9c6a-d3792a0d0fc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-741712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-741712 --output=json --layout=cluster: exit status 7 (293.603447ms)

-- stdout --
	{"Name":"insufficient-storage-741712","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-741712","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0416 00:20:15.785917  209989 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-741712" does not appear in /home/jenkins/minikube-integration/18647-2210/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-741712 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-741712 --output=json --layout=cluster: exit status 7 (287.862952ms)

-- stdout --
	{"Name":"insufficient-storage-741712","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-741712","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0416 00:20:16.074495  210043 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-741712" does not appear in /home/jenkins/minikube-integration/18647-2210/kubeconfig
	E0416 00:20:16.085523  210043 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/insufficient-storage-741712/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-741712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-741712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-741712: (1.667964533s)
--- PASS: TestInsufficientStorage (14.13s)
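With `--output=json`, minikube emits one CloudEvents-style JSON object per line, and the storage check surfaces as an `io.k8s.sigs.minikube.error` event carrying exit code 26 (RSRC_DOCKER_STORAGE). A sketch that filters such a stream for error events; the field names follow the transcript above, and only the fields shown there are decoded:
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the subset of minikube's JSON output visible above.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
			Name     string `json:"name"`
		} `json:"data"`
	}

	func main() {
		// Read events from stdin, e.g.:
		//   minikube start --output=json ... | go run .
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}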

TestRunningBinaryUpgrade (79.26s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2734242313 start -p running-upgrade-182415 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2734242313 start -p running-upgrade-182415 --memory=2200 --vm-driver=docker  --container-runtime=docker: (44.758745401s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-182415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-182415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.754005144s)
helpers_test.go:175: Cleaning up "running-upgrade-182415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-182415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-182415: (2.348343824s)
--- PASS: TestRunningBinaryUpgrade (79.26s)
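This test is two starts against the same profile: a released v1.26.0 binary creates the cluster, then the binary under test restarts it in place. A sketch of the same flow, assuming illustrative binary paths (the /tmp name in the log is a randomized temp copy of the release):
	package main

	import (
		"os"
		"os/exec"
	)

	// run shells out with output attached, mirroring the (dbg) Run steps above.
	func run(bin string, args ...string) error {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "running-upgrade-182415"
		old, current := "/tmp/minikube-v1.26.0", "out/minikube-linux-arm64" // assumed paths
		if err := run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=docker"); err != nil {
			panic(err)
		}
		// Upgrade in place: the new binary adopts the still-running cluster.
		if err := run(current, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=docker"); err != nil {
			panic(err)
		}
	}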

TestKubernetesUpgrade (386s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0416 00:27:32.298477    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.409580421s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-666104
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-666104: (10.871158368s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-666104 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-666104 status --format={{.Host}}: exit status 7 (81.148674ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m49.176062621s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-666104 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (139.501826ms)

-- stdout --
	* [kubernetes-upgrade-666104] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-666104
	    minikube start -p kubernetes-upgrade-666104 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6661042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-666104 --kubernetes-version=v1.30.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-666104 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.621392677s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-666104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-666104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-666104: (2.607553968s)
--- PASS: TestKubernetesUpgrade (386.00s)
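The downgrade attempt exits 106 with K8S_DOWNGRADE_UNSUPPORTED before touching the cluster; the guard amounts to a semantic-version comparison between the running and requested versions. A sketch of such a check using golang.org/x/mod/semver, offered as an illustration rather than minikube's actual implementation:
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// canChangeVersion refuses moves to an older Kubernetes version than
	// the existing cluster, as the test above expects. Illustrative only.
	func canChangeVersion(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(canChangeVersion("v1.30.0-rc.2", "v1.20.0")) // refused, as in the test
		fmt.Println(canChangeVersion("v1.20.0", "v1.30.0-rc.2")) // nil: upgrades are allowed
	}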

TestMissingContainerUpgrade (118.14s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3265823494 start -p missing-upgrade-593437 --memory=2200 --driver=docker  --container-runtime=docker
E0416 00:26:10.377801    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3265823494 start -p missing-upgrade-593437 --memory=2200 --driver=docker  --container-runtime=docker: (40.140728251s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-593437
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-593437: (10.437909104s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-593437
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-593437 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0416 00:26:33.666821    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-593437 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.578870726s)
helpers_test.go:175: Cleaning up "missing-upgrade-593437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-593437
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-593437: (2.218961948s)
--- PASS: TestMissingContainerUpgrade (118.14s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (105.783668ms)

-- stdout --
	* [NoKubernetes-568121] minikube v1.33.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-2210/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-2210/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
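Exit status 14 (MK_USAGE) here is pure argument validation: `--kubernetes-version` contradicts `--no-kubernetes` and is rejected before any driver work starts. A sketch of the check with the standard flag package; the wiring is illustrative, not minikube's option parsing:
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		// Mutually exclusive flags fail fast with the usage exit code seen above.
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}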

TestNoKubernetes/serial/StartWithK8s (45.33s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-568121 --driver=docker  --container-runtime=docker
E0416 00:20:18.834101    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-568121 --driver=docker  --container-runtime=docker: (44.7984736s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-568121 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.33s)

TestNoKubernetes/serial/StartWithStopK8s (17.95s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --driver=docker  --container-runtime=docker: (15.649983727s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-568121 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-568121 status -o json: exit status 2 (387.639841ms)

-- stdout --
	{"Name":"NoKubernetes-568121","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-568121
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-568121: (1.91231526s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.95s)
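`status -o json` prints the flat object shown above and exits 2 when a component is stopped, so a consumer has to treat that exit code as data rather than failure. A decoding sketch, with field names taken from the output above and an illustrative profile name:
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileStatus mirrors the fields visible in the `status -o json` output.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Exit code 2 just means some component is stopped; the JSON on
		// stdout is still valid, so the error is deliberately ignored.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-568121", "status", "-o", "json").Output()
		var st profileStatus
		if err := json.Unmarshal(out, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}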

TestNoKubernetes/serial/Start (8.56s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-568121 --no-kubernetes --driver=docker  --container-runtime=docker: (8.562506883s)
--- PASS: TestNoKubernetes/serial/Start (8.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-568121 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-568121 "sudo systemctl is-active --quiet service kubelet": exit status 1 (423.18822ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

TestNoKubernetes/serial/ProfileList (0.85s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.85s)

TestNoKubernetes/serial/Stop (1.32s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-568121
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-568121: (1.320413913s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.75s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-568121 --driver=docker  --container-runtime=docker
E0416 00:21:33.666660    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-568121 --driver=docker  --container-runtime=docker: (8.745382683s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-568121 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-568121 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.025267ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (1.74s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

TestStoppedBinaryUpgrade/Upgrade (117.39s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.203940762 start -p stopped-upgrade-813793 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0416 00:24:36.714461    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0416 00:24:48.452440    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.457688    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.467910    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.488102    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.528354    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.608598    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:48.768798    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.203940762 start -p stopped-upgrade-813793 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m12.046125132s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.203940762 -p stopped-upgrade-813793 stop
E0416 00:24:49.089679    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:49.730577    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:51.013518    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:53.575275    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:24:58.695511    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.203940762 -p stopped-upgrade-813793 stop: (10.926793585s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-813793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0416 00:25:08.936154    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:25:18.834283    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0416 00:25:29.417207    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-813793 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.421354003s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-813793
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-813793: (1.351525323s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

TestPause/serial/Start (52.53s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-645932 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-645932 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (52.530730187s)
--- PASS: TestPause/serial/Start (52.53s)

TestPause/serial/SecondStartNoReconfiguration (36.93s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-645932 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0416 00:29:48.451625    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:30:16.139653    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:30:18.834328    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-645932 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.911960406s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.93s)

TestPause/serial/Pause (0.64s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-645932 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.33s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-645932 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-645932 --output=json --layout=cluster: exit status 2 (323.569401ms)

-- stdout --
	{"Name":"pause-645932","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-645932","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
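The `--layout=cluster` form nests per-node component statuses and borrows HTTP codes for states: 200 OK, 405 Stopped, and 418 Paused here, with 507 InsufficientStorage in the earlier storage test. A decoding sketch along the same lines, again ignoring the expected non-zero exit:
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterState models the --layout=cluster JSON seen above.
	type clusterState struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		// A paused cluster makes status exit non-zero by design; the JSON
		// on stdout is still complete. Profile name is illustrative.
		out, _ := exec.Command("out/minikube-linux-arm64", "status", "-p", "pause-645932", "--output=json", "--layout=cluster").Output()
		var cs clusterState
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		for _, n := range cs.Nodes {
			fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
		}
	}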

TestPause/serial/Unpause (0.57s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-645932 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.72s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-645932 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (2.23s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-645932 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-645932 --alsologtostderr -v=5: (2.234387965s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

TestPause/serial/VerifyDeletedResources (0.34s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-645932
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-645932: exit status 1 (14.469231ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-645932: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)
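The cleanup verification leans on `docker volume inspect` failing (exit status 1, empty `[]` result) once `minikube delete` has removed the profile's volume. The same check in Go, shelling out rather than pulling in the Docker SDK:
	package main

	import (
		"fmt"
		"os/exec"
	)

	// volumeGone reports whether `docker volume inspect` fails for the
	// name, which is how the test confirms the volume was deleted.
	func volumeGone(name string) bool {
		return exec.Command("docker", "volume", "inspect", name).Run() != nil
	}

	func main() {
		fmt.Println("pause-645932 gone:", volumeGone("pause-645932"))
	}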

TestNetworkPlugins/group/auto/Start (50.35s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (50.344997735s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.35s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (11.46s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tzhn8" [afe95fa2-e871-493f-b688-a3154d4e39ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tzhn8" [afe95fa2-e871-493f-b688-a3154d4e39ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004098266s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.46s)

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
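The Localhost and HairPin steps are both bare TCP connect probes run inside the netcat pod (`nc -w 5 -i 5 -z <target> 8080`); hairpin mode is proven when the pod can reach itself through its own service name. An equivalent probe in Go; the service name "netcat" only resolves in-cluster:
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe mirrors `nc -w 5 -z <host> 8080`: succeed if a TCP connection
	// can be opened within the timeout, sending no data.
	func probe(host string) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		for _, h := range []string{"localhost", "netcat"} { // "netcat" resolves only in-cluster
			fmt.Println(h, probe(h))
		}
	}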

TestNetworkPlugins/group/kindnet/Start (69.54s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m9.535108803s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.54s)

TestNetworkPlugins/group/calico/Start (55.7s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (55.695466341s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.70s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gsx2g" [4dd30cf0-a258-4826-8655-006d60d3011c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006151565s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m9hdf" [8ec6ece0-00e2-4e8b-b202-3ea0eff09306] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m9hdf" [8ec6ece0-00e2-4e8b-b202-3ea0eff09306] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004873873s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)

TestNetworkPlugins/group/kindnet/DNS (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (75.71s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m15.705304781s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.71s)

TestNetworkPlugins/group/calico/ControllerPod (29.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-5zktj" [84fe0cda-68ef-4958-9a70-b391907b166a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 29.015502229s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (29.02s)
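The ControllerPod wait above polls pods matching `k8s-app=calico-node` until they report Running and Ready, including the init-container progression shown. `kubectl wait` expresses the same readiness gate in one call; the context, label, and 10m timeout below mirror this run:
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Block until every calico-node pod passes its readiness gate.
		cmd := exec.Command("kubectl", "--context", "calico-128493",
			"wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod",
			"-l", "k8s-app=calico-node", "--timeout=10m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}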

TestNetworkPlugins/group/calico/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.49s)

TestNetworkPlugins/group/calico/NetCatPod (11.49s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2hdd7" [f8a5780e-d6b2-4e87-9c39-403799b01233] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2hdd7" [f8a5780e-d6b2-4e87-9c39-403799b01233] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004303143s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.49s)

TestNetworkPlugins/group/calico/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.52s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kqbxb" [1e0c0361-77f9-41ac-9734-de632661c08d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kqbxb" [1e0c0361-77f9-41ac-9734-de632661c08d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00412606s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.45s)

TestNetworkPlugins/group/false/Start (58.5s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (58.495206664s)
--- PASS: TestNetworkPlugins/group/false/Start (58.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (55.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (55.727947628s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rfhkv" [6ef6a36c-ea34-4ce9-a3da-196febc61503] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rfhkv" [6ef6a36c-ea34-4ce9-a3da-196febc61503] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003269207s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-128493 replace --force -f testdata/netcat-deployment.yaml
E0416 00:36:41.243834    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x9vgw" [18e61603-f734-4409-bf92-e27ff817d747] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x9vgw" [18e61603-f734-4409-bf92-e27ff817d747] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.018856051s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (75.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m15.104279935s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (96.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0416 00:37:42.685436    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m36.380367427s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ctdkk" [f929e783-0d8b-4453-9b5e-8fe43dcd2e83] Running
E0416 00:38:03.607960    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:03.613314    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:03.623542    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:03.644495    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004428795s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
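Note: the E0416 ... cert_rotation.go:168 lines interleaved here (and throughout the remainder of the report) appear to come from client-go's certificate-rotation watcher in the shared test process (pid 7563): profiles deleted by earlier tests (kindnet-128493, calico-128493, and others) leave kubeconfig entries whose client.crt files no longer exist, so the watcher logs an error on every retry. They are noise relative to the test actually running. A small filter, assuming the report is piped in on stdin, makes the steps easier to read:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Drop the cert_rotation noise so the test steps stand out.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			line := sc.Text()
			if strings.Contains(line, "cert_rotation.go") &&
				strings.Contains(line, "no such file or directory") {
				continue
			}
			fmt.Println(line)
		}
	}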

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-128493 "pgrep -a kubelet"
E0416 00:38:03.685359    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:03.765728    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:03.926469    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-128493 replace --force -f testdata/netcat-deployment.yaml
E0416 00:38:04.251334    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pb5lm" [cb63ef57-b790-4aef-bbb5-f9e2592cca33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 00:38:04.892188    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:38:06.172532    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pb5lm" [cb63ef57-b790-4aef-bbb5-f9e2592cca33] Running
E0416 00:38:08.733165    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003814117s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0416 00:38:13.854340    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (88.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0416 00:38:44.575340    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-128493 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m28.352090467s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (88.35s)
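Note: unlike the CNI-based starts earlier in this group (--cni=false, --cni=flannel, --cni=bridge, --enable-default-cni=true), the kubenet profile is selected with --network-plugin=kubenet, kubelet's built-in basic network plugin rather than a CNI manifest. A sketch of the invocation, flags copied from the log line above (illustrative only; it prints the command instead of running a multi-minute start):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kubenet is picked via --network-plugin=kubenet, not --cni=...
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "kubenet-128493",
			"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			"--network-plugin=kubenet", "--driver=docker", "--container-runtime=docker")
		fmt.Println("would run:", cmd.Args)
	}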

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p7gsn" [b099ff0a-6b41-423e-84a1-b485eeb5db37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 00:38:58.957395    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:58.962746    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:58.974207    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:58.994875    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:59.036411    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:59.117097    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:59.277899    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:38:59.598603    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:39:00.239768    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:39:01.520807    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:39:04.081600    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:39:04.605596    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-p7gsn" [b099ff0a-6b41-423e-84a1-b485eeb5db37] Running
E0416 00:39:09.201869    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004368322s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (164.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0416 00:39:39.922300    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:39:48.452248    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-014065 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m44.53493516s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-128493 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-128493 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kb7x7" [5a9d98c9-35a4-4688-b659-810a0ff1561e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 00:40:06.357838    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.363189    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.373487    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.393928    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.434295    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.514571    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.675134    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:06.996251    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:07.637122    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:08.917570    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kb7x7" [5a9d98c9-35a4-4688-b659-810a0ff1561e] Running
E0416 00:40:11.477876    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004443115s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-128493 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-128493 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E0416 00:53:57.943024    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:53:58.956829    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:54:21.430006    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:54:26.659371    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:54:48.451964    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:55:01.964627    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:55:05.392700    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:55:06.358001    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:55:18.833934    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0416 00:55:22.003640    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (59.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-140300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2
E0416 00:40:47.320498    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:40:47.457528    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:41:07.742791    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:07.748019    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:07.758270    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:07.778506    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:07.818724    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:07.898957    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:08.059828    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:08.380717    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:09.021276    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:10.301449    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:11.500382    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:41:12.861644    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:16.715133    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0416 00:41:17.982080    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:20.762501    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
E0416 00:41:28.223158    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:28.280633    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:41:33.667131    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-140300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2: (59.315909433s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-140300 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [69371eaa-500d-4094-b37a-a861ffa52d0e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [69371eaa-500d-4094-b37a-a861ffa52d0e] Running
E0416 00:41:41.380920    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.386172    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.396789    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.417052    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.457324    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.537572    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:41.698292    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:42.019097    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:42.659845    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:41:42.803075    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
E0416 00:41:43.940454    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003329404s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-140300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)
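Note: DeployApp finishes by running `ulimit -n` inside the busybox pod, confirming the container inherits a usable open-file limit from the runtime. A standalone version of that probe, assuming the no-preload-140300 context (a hypothetical helper, not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the test runs: read the open-file limit inside the pod.
		out, err := exec.Command("kubectl", "--context", "no-preload-140300",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			fmt.Println("exec failed:", err)
			return
		}
		fmt.Println("open-file limit in pod:", strings.TrimSpace(string(out)))
	}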

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-140300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0416 00:41:46.500976    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-140300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.054725071s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-140300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (10.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-140300 --alsologtostderr -v=3
E0416 00:41:48.445769    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
E0416 00:41:48.704097    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:41:51.621954    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-140300 --alsologtostderr -v=3: (10.849652541s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-140300 -n no-preload-140300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-140300 -n no-preload-140300: exit status 7 (83.765451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-140300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
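Note: after a stop, `minikube status --format={{.Host}}` prints Stopped and exits non-zero; the test observes exit status 7 here, records it as "may be ok", and proceeds to enable the dashboard addon against the stopped profile. A sketch of tolerating that exit code from Go, binary path and profile name taken from the log (illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-140300", "-n", "no-preload-140300").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A stopped profile makes status exit non-zero (7 in the log);
			// the test treats this as acceptable and carries on.
			fmt.Printf("host=%q exit=%d (may be ok)\n", string(out), ee.ExitCode())
			return
		}
		fmt.Printf("host=%q\n", string(out))
	}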

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (266.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-140300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2
E0416 00:42:01.862745    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-140300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2: (4m25.999373269s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-140300 -n no-preload-140300
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-014065 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e11cd293-9fc9-48db-8c5c-48aeb98208a4] Pending
helpers_test.go:344: "busybox" [e11cd293-9fc9-48db-8c5c-48aeb98208a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e11cd293-9fc9-48db-8c5c-48aeb98208a4] Running
E0416 00:42:22.343351    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004348887s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-014065 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-014065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-014065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.568485758s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-014065 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-014065 --alsologtostderr -v=3
E0416 00:42:29.664373    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-014065 --alsologtostderr -v=3: (11.247823655s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-014065 -n old-k8s-version-014065
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-014065 -n old-k8s-version-014065: exit status 7 (85.965996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-014065 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s7fhj" [246a13c2-bb70-4477-a1a7-fb9043d37b33] Running
E0416 00:46:27.314133    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00394297s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s7fhj" [246a13c2-bb70-4477-a1a7-fb9043d37b33] Running
E0416 00:46:33.667270    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0416 00:46:35.426655    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004905763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-140300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-140300 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-140300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-140300 -n no-preload-140300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-140300 -n no-preload-140300: exit status 2 (325.530698ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-140300 -n no-preload-140300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-140300 -n no-preload-140300: exit status 2 (369.018135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-140300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-140300 -n no-preload-140300
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-140300 -n no-preload-140300
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)
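Note: the Pause subtest pauses the profile, confirms `status --format={{.APIServer}}` reports Paused and `--format={{.Kubelet}}` reports Stopped (both exiting with status 2, tolerated as "may be ok"), then unpauses and re-checks both. A compressed sketch of that sequence, profile name from the log (illustrative, not the test's implementation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// run executes a minikube subcommand and returns its exit code.
	func run(args ...string) int {
		err := exec.Command("out/minikube-linux-arm64", args...).Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		if err != nil {
			return -1 // e.g. binary not found
		}
		return 0
	}

	func main() {
		p := "no-preload-140300"
		run("pause", "-p", p)
		// While paused, the log shows APIServer=Paused and Kubelet=Stopped,
		// each reported with exit status 2.
		fmt.Println("apiserver exit:", run("status", "--format={{.APIServer}}", "-p", p, "-n", p))
		fmt.Println("kubelet exit:", run("status", "--format={{.Kubelet}}", "-p", p, "-n", p))
		run("unpause", "-p", p)
	}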

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (49.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-534050 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3
E0416 00:46:41.801156    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:47:09.065180    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-534050 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3: (49.739157358s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-534050 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [219c39fc-00e5-4d16-acce-27789276b4c0] Pending
helpers_test.go:344: "busybox" [219c39fc-00e5-4d16-acce-27789276b4c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [219c39fc-00e5-4d16-acce-27789276b4c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004590927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-534050 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-534050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-534050 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051875155s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-534050 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)
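--images and --registries let the harness pin an addon to a stand-in image; MetricsServer is pointed at the unreachable registry fake.domain on purpose, so the Deployment exists without ever pulling. A sketch of checking that the override landed (the exact composed image path is an assumption about how minikube joins registry and image):

	kubectl --context embed-certs-534050 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to reference fake.domain/.../echoserver:1.4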

TestStartStop/group/embed-certs/serial/Stop (11.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-534050 --alsologtostderr -v=3
E0416 00:47:49.234351    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-534050 --alsologtostderr -v=3: (11.027732546s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-534050 -n embed-certs-534050
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-534050 -n embed-certs-534050: exit status 7 (85.729145ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-534050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)
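minikube status reports component state through its exit code, which is why the harness tolerates non-zero exits here; the exit status 7 above accompanies the Stopped host shown on stdout. A sketch:

	minikube status --format='{{.Host}}' -p embed-certs-534050
	echo $?   # 7 was observed above while the profile is stopped; 0 once it runs again
	# Addons can still be toggled while the profile is down
	minikube addons enable dashboard -p embed-certs-534050 --images=MetricsScraper=registry.k8s.io/echoserver:1.4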

TestStartStop/group/embed-certs/serial/SecondStart (266.71s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-534050 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3
E0416 00:47:57.645390    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:48:03.607308    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
E0416 00:48:25.328556    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-534050 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3: (4m26.342233237s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-534050 -n embed-certs-534050
E0416 00:52:18.549523    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:52:18.751828    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.71s)
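The E0416 cert_rotation.go:168 lines interleaved above appear to come from the long-lived test process rather than this test: a client-go certificate watcher still references client.crt files of profiles (bridge-128493, flannel-128493, ...) that earlier tests have already deleted. When reading the report they can be filtered out; a sketch, with the log file name assumed:

	grep -v 'cert_rotation.go' docker_linux_docker_arm64_18647.log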

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gh7qv" [b9c66891-8ae4-4716-86ba-f4ea16918c05] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004432911s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gh7qv" [b9c66891-8ae4-4716-86ba-f4ea16918c05] Running
E0416 00:48:57.942830    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:48:58.957010    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/calico-128493/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004438407s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-014065 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-014065 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (2.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-014065 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-014065 -n old-k8s-version-014065
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-014065 -n old-k8s-version-014065: exit status 2 (361.486722ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-014065 -n old-k8s-version-014065
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-014065 -n old-k8s-version-014065: exit status 2 (322.639566ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-014065 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-014065 -n old-k8s-version-014065
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-014065 -n old-k8s-version-014065
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.92s)
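The Pause check is a full pause/inspect/unpause cycle; while paused, the APIServer formatter prints Paused and the Kubelet formatter Stopped, each via exit status 2, as seen above. The same cycle by hand, as a sketch:

	minikube pause -p old-k8s-version-014065
	minikube status --format='{{.APIServer}}' -p old-k8s-version-014065   # Paused, exit 2
	minikube status --format='{{.Kubelet}}' -p old-k8s-version-014065     # Stopped, exit 2
	minikube unpause -p old-k8s-version-014065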

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-638708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3
E0416 00:49:25.641704    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/bridge-128493/client.crt: no such file or directory
E0416 00:49:48.451857    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/skaffold-421454/client.crt: no such file or directory
E0416 00:50:05.392468    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
E0416 00:50:06.357840    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/custom-flannel-128493/client.crt: no such file or directory
E0416 00:50:18.833677    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0416 00:50:33.074842    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kubenet-128493/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-638708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3: (1m26.821123374s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.82s)
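--apiserver-port=8444 moves the API server off minikube's default 8443, and the generated kubeconfig entry points at the new port, so kubectl needs no extra flags. A quick check, as a sketch:

	# The server URL for this profile should end in :8444 (the address varies per run)
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-638708")].cluster.server}'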

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-638708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [57d25bd3-181c-44df-8249-2563946980b8] Pending
helpers_test.go:344: "busybox" [57d25bd3-181c-44df-8249-2563946980b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [57d25bd3-181c-44df-8249-2563946980b8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004106005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-638708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-638708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-638708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076512383s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-638708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-638708 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-638708 --alsologtostderr -v=3: (11.000987169s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708: exit status 7 (78.003748ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-638708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-638708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3
E0416 00:51:07.742735    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/false-128493/client.crt: no such file or directory
E0416 00:51:20.762784    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
E0416 00:51:33.666781    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/functional-673373/client.crt: no such file or directory
E0416 00:51:37.586667    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.591921    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.602140    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.622414    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.662768    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.743025    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:37.903383    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:38.224178    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:38.865038    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:40.145516    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:41.381630    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/enable-default-cni-128493/client.crt: no such file or directory
E0416 00:51:41.892284    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/addons-716538/client.crt: no such file or directory
E0416 00:51:42.706167    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:47.826776    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:51:58.067501    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:52:18.114006    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.119306    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.129521    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.149859    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.190124    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.270392    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:18.431374    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-638708 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.3: (4m26.375075988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.73s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jbbdt" [0467cba4-c8a9-4c72-b758-11edbfa48b47] Running
E0416 00:52:19.392403    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:20.673142    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:23.234294    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004108491s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
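The post-restart dashboard checks are a label-selector wait; a manual equivalent, as a sketch:

	kubectl --context embed-certs-534050 -n kubernetes-dashboard wait \
	  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m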

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jbbdt" [0467cba4-c8a9-4c72-b758-11edbfa48b47] Running
E0416 00:52:28.354945    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004096146s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-534050 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-534050 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)
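The image audit lists every image loaded in the node and flags anything outside the expected Kubernetes set; the busybox image deployed earlier is the one non-minikube hit. Inspecting the same list, as a sketch (jq assumed available; the repoTags field name is an assumption about minikube's JSON output):

	minikube -p embed-certs-534050 image list --format=json | jq -r '.[].repoTags[]' | sort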

TestStartStop/group/embed-certs/serial/Pause (2.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-534050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-534050 -n embed-certs-534050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-534050 -n embed-certs-534050: exit status 2 (336.482072ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-534050 -n embed-certs-534050
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-534050 -n embed-certs-534050: exit status 2 (322.941899ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-534050 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-534050 -n embed-certs-534050
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-534050 -n embed-certs-534050
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.90s)

TestStartStop/group/newest-cni/serial/FirstStart (49.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-238000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2
E0416 00:52:38.595245    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:43.806358    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/auto-128493/client.crt: no such file or directory
E0416 00:52:57.645413    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/flannel-128493/client.crt: no such file or directory
E0416 00:52:59.076333    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
E0416 00:52:59.509693    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/no-preload-140300/client.crt: no such file or directory
E0416 00:53:03.608261    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/kindnet-128493/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-238000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2: (49.288113352s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.29s)
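This start pins a release-candidate Kubernetes, pushes a custom pod CIDR through --extra-config=kubeadm.pod-network-cidr, and narrows --wait to apiserver, system pods, and the default service account because no CNI is installed yet, so most other pods cannot schedule. Confirming the CIDR reached the cluster configuration, as a sketch:

	kubectl --context newest-cni-238000 -n kube-system get cm kubeadm-config \
	  -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet   # expect 10.42.0.0/16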

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-238000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-238000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127179338s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (5.77s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-238000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-238000 --alsologtostderr -v=3: (5.774522955s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.77s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-238000 -n newest-cni-238000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-238000 -n newest-cni-238000: exit status 7 (70.964339ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-238000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (18.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-238000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2
E0416 00:53:40.043665    7563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-2210/.minikube/profiles/old-k8s-version-014065/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-238000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0-rc.2: (18.271927749s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-238000 -n newest-cni-238000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-238000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (2.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-238000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-238000 -n newest-cni-238000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-238000 -n newest-cni-238000: exit status 2 (345.338057ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-238000 -n newest-cni-238000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-238000 -n newest-cni-238000: exit status 2 (333.588466ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-238000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-238000 -n newest-cni-238000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-238000 -n newest-cni-238000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.83s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zdjkr" [b9a62627-5947-43c1-983a-2b3f9a599987] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003794328s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zdjkr" [b9a62627-5947-43c1-983a-2b3f9a599987] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003945613s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-638708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-638708 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-638708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708: exit status 2 (314.713064ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708: exit status 2 (333.177295ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-638708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-638708 -n default-k8s-diff-port-638708
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.83s)

Test skip (27/350)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-774235 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-774235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-774235
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-128493 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-128493

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-128493

>>> host: /etc/nsswitch.conf:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/hosts:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/resolv.conf:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-128493

>>> host: crictl pods:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: crictl containers:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> k8s: describe netcat deployment:
error: context "cilium-128493" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-128493" does not exist

>>> k8s: netcat logs:
error: context "cilium-128493" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-128493" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-128493" does not exist

>>> k8s: coredns logs:
error: context "cilium-128493" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-128493" does not exist

>>> k8s: api server logs:
error: context "cilium-128493" does not exist

>>> host: /etc/cni:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: ip a s:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: ip r s:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: iptables-save:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: iptables table nat:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-128493

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-128493

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-128493" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-128493" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-128493

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-128493

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-128493" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-128493" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-128493" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-128493" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-128493" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: kubelet daemon config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> k8s: kubelet logs:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-128493

>>> host: docker daemon status:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: docker daemon config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: docker system info:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: cri-docker daemon status:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: cri-docker daemon config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: cri-dockerd version:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: containerd daemon status:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: containerd daemon config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: containerd config dump:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: crio daemon status:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: crio daemon config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: /etc/crio:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

>>> host: crio config:
* Profile "cilium-128493" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128493"

----------------------- debugLogs end: cilium-128493 [took: 5.241105312s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-128493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-128493
--- SKIP: TestNetworkPlugins/group/cilium (5.47s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-342359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-342359
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)