Test Report: Docker_Linux_containerd_arm64 18241

51610bcb4030010c42e994a5dfa0c2b02e4dd273:2024-03-07:33452

Test fail (7/335)

TestAddons/parallel/Ingress (37.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-493601 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-493601 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-493601 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f8994272-886c-4c21-964e-356491f7ff16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f8994272-886c-4c21-964e-356491f7ff16] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004069569s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-493601 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.07734008s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 addons disable ingress-dns --alsologtostderr -v=1: (1.588672519s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 addons disable ingress --alsologtostderr -v=1: (7.840703544s)
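
The failing step above is the ingress-dns check: nslookup against the node IP (192.168.49.2) timed out, meaning no DNS server answered at that address. A minimal manual reproduction, sketched from the commands the test itself ran (it assumes the addons-493601 profile from this run is still up and that testdata/ingress-dns-example-v1.yaml is the manifest registering the hello-john.test record):

	# Re-apply the ingress-dns example, then query its DNS server at the node IP.
	kubectl --context addons-493601 replace --force -f testdata/ingress-dns-example-v1.yaml
	NODE_IP="$(out/minikube-linux-arm64 -p addons-493601 ip)"   # printed 192.168.49.2 in this run
	nslookup hello-john.test "$NODE_IP"

A timeout here (";; connection timed out; no servers could be reached") suggests nothing was answering on port 53 at the node IP, which points at the ingress-dns pod or the container network rather than at a bad DNS record.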
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-493601
helpers_test.go:235: (dbg) docker inspect addons-493601:

-- stdout --
	[
	    {
	        "Id": "7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b",
	        "Created": "2024-03-07T17:36:12.882676759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T17:36:13.198765946Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b/hosts",
	        "LogPath": "/var/lib/docker/containers/7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b/7a048ae680f8230e3cc6afe8ce1d64dcd9f8ff00525bec12c14952ed9fb0881b-json.log",
	        "Name": "/addons-493601",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-493601:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-493601",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f37fd958c2602c7e710a31333846d0d99307dec42e3a5e13affd66a29c574208-init/diff:/var/lib/docker/overlay2/0779a2b4023b2ef8823e4f754756b06078299f99078b3b2bb639a1812d9ff63d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f37fd958c2602c7e710a31333846d0d99307dec42e3a5e13affd66a29c574208/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f37fd958c2602c7e710a31333846d0d99307dec42e3a5e13affd66a29c574208/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f37fd958c2602c7e710a31333846d0d99307dec42e3a5e13affd66a29c574208/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-493601",
	                "Source": "/var/lib/docker/volumes/addons-493601/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-493601",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-493601",
	                "name.minikube.sigs.k8s.io": "addons-493601",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "237434b91b7875eda8e0ca2fd93f68258e2048cddb5c244dd0e436b6a6d2bc22",
	            "SandboxKey": "/var/run/docker/netns/237434b91b78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-493601": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a048ae680f8",
	                        "addons-493601"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3a81bd7d273d926330478d1a0bd54a8cd02c0f21dfecd5e5dbebaf533be0f87c",
	                    "EndpointID": "a270e3f03faee7ef0da0b9ec2cee7fac0b364a31d58126432874286ec2790386",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-493601",
	                        "7a048ae680f8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
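
For reference, the two fields the harness consumes from the inspect dump above are the node IP and the published SSH port; both can be read directly with docker's --format templates (a sketch, with the container name taken from this run; the SSH-port template mirrors the one minikube itself runs later in this log):

	docker inspect addons-493601 --format '{{(index .NetworkSettings.Networks "addons-493601").IPAddress}}'   # 192.168.49.2
	docker inspect addons-493601 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'    # 33142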
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-493601 -n addons-493601
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 logs -n 25: (1.485828616s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-928192                                                                     | download-only-928192   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-158763                                                                     | download-only-158763   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-885743                                                                     | download-only-885743   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-928192                                                                     | download-only-928192   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| start   | --download-only -p                                                                          | download-docker-699370 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | download-docker-699370                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-699370                                                                   | download-docker-699370 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-722782   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | binary-mirror-722782                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33827                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-722782                                                                     | binary-mirror-722782   | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | addons-493601                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | addons-493601                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-493601 --wait=true                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-493601 ip                                                                            | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	| addons  | addons-493601 addons disable                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-493601 addons                                                                        | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | addons-493601                                                                               |                        |         |         |                     |                     |
	| addons  | addons-493601 addons                                                                        | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-493601 ssh curl -s                                                                   | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-493601 ip                                                                            | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	| addons  | addons-493601 addons                                                                        | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:38 UTC | 07 Mar 24 17:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:39 UTC | 07 Mar 24 17:39 UTC |
	|         | -p addons-493601                                                                            |                        |         |         |                     |                     |
	| addons  | addons-493601 addons disable                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:39 UTC | 07 Mar 24 17:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-493601 addons disable                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:39 UTC | 07 Mar 24 17:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-493601 ssh cat                                                                       | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:39 UTC | 07 Mar 24 17:39 UTC |
	|         | /opt/local-path-provisioner/pvc-d88010ac-e556-4434-84f7-887264f6234e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-493601 addons disable                                                                | addons-493601          | jenkins | v1.32.0 | 07 Mar 24 17:39 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 17:35:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 17:35:49.120591  286989 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:35:49.120806  286989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:49.120819  286989 out.go:304] Setting ErrFile to fd 2...
	I0307 17:35:49.120825  286989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:49.121094  286989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:35:49.121591  286989 out.go:298] Setting JSON to false
	I0307 17:35:49.122474  286989 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4693,"bootTime":1709828256,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:35:49.122539  286989 start.go:139] virtualization:  
	I0307 17:35:49.125440  286989 out.go:177] * [addons-493601] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 17:35:49.127831  286989 notify.go:220] Checking for updates...
	I0307 17:35:49.130139  286989 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 17:35:49.132132  286989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:35:49.134161  286989 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:35:49.136150  286989 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:35:49.138288  286989 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 17:35:49.141013  286989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 17:35:49.143093  286989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:35:49.163106  286989 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:35:49.163242  286989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:49.230349  286989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:49.221401685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:49.230483  286989 docker.go:295] overlay module found
	I0307 17:35:49.232959  286989 out.go:177] * Using the docker driver based on user configuration
	I0307 17:35:49.235134  286989 start.go:297] selected driver: docker
	I0307 17:35:49.235152  286989 start.go:901] validating driver "docker" against <nil>
	I0307 17:35:49.235165  286989 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 17:35:49.235816  286989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:49.288375  286989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:49.279650287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:49.288537  286989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 17:35:49.288769  286989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 17:35:49.291212  286989 out.go:177] * Using Docker driver with root privileges
	I0307 17:35:49.293338  286989 cni.go:84] Creating CNI manager for ""
	I0307 17:35:49.293358  286989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:35:49.293369  286989 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 17:35:49.293461  286989 start.go:340] cluster config:
	{Name:addons-493601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-493601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:35:49.296815  286989 out.go:177] * Starting "addons-493601" primary control-plane node in "addons-493601" cluster
	I0307 17:35:49.298893  286989 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 17:35:49.300797  286989 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 17:35:49.302996  286989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 17:35:49.303052  286989 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 17:35:49.303065  286989 cache.go:56] Caching tarball of preloaded images
	I0307 17:35:49.303101  286989 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 17:35:49.303154  286989 preload.go:173] Found /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 17:35:49.303165  286989 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 17:35:49.303516  286989 profile.go:142] Saving config to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/config.json ...
	I0307 17:35:49.303586  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/config.json: {Name:mk32d0653c5e60da05b6b572af3fb6d236b0f4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:35:49.323494  286989 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 17:35:49.323652  286989 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 17:35:49.323682  286989 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 17:35:49.323691  286989 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 17:35:49.323700  286989 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 17:35:49.323708  286989 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0307 17:36:05.489048  286989 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0307 17:36:05.489087  286989 cache.go:194] Successfully downloaded all kic artifacts
	I0307 17:36:05.489117  286989 start.go:360] acquireMachinesLock for addons-493601: {Name:mke286e0a2319a2cd4c908a07453eb7816fcf5e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 17:36:05.489242  286989 start.go:364] duration metric: took 100.25µs to acquireMachinesLock for "addons-493601"
	I0307 17:36:05.489274  286989 start.go:93] Provisioning new machine with config: &{Name:addons-493601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-493601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 17:36:05.489372  286989 start.go:125] createHost starting for "" (driver="docker")
	I0307 17:36:05.491900  286989 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0307 17:36:05.492162  286989 start.go:159] libmachine.API.Create for "addons-493601" (driver="docker")
	I0307 17:36:05.492198  286989 client.go:168] LocalClient.Create starting
	I0307 17:36:05.492330  286989 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem
	I0307 17:36:05.872655  286989 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem
	I0307 17:36:06.261082  286989 cli_runner.go:164] Run: docker network inspect addons-493601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 17:36:06.282992  286989 cli_runner.go:211] docker network inspect addons-493601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 17:36:06.283085  286989 network_create.go:281] running [docker network inspect addons-493601] to gather additional debugging logs...
	I0307 17:36:06.283108  286989 cli_runner.go:164] Run: docker network inspect addons-493601
	W0307 17:36:06.297282  286989 cli_runner.go:211] docker network inspect addons-493601 returned with exit code 1
	I0307 17:36:06.297311  286989 network_create.go:284] error running [docker network inspect addons-493601]: docker network inspect addons-493601: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-493601 not found
	I0307 17:36:06.297327  286989 network_create.go:286] output of [docker network inspect addons-493601]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-493601 not found
	
	** /stderr **
	I0307 17:36:06.297417  286989 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 17:36:06.311858  286989 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025989a0}
	I0307 17:36:06.311904  286989 network_create.go:124] attempt to create docker network addons-493601 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0307 17:36:06.311957  286989 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-493601 addons-493601
	I0307 17:36:06.373458  286989 network_create.go:108] docker network addons-493601 192.168.49.0/24 created
	I0307 17:36:06.373491  286989 kic.go:121] calculated static IP "192.168.49.2" for the "addons-493601" container
	I0307 17:36:06.373801  286989 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 17:36:06.387741  286989 cli_runner.go:164] Run: docker volume create addons-493601 --label name.minikube.sigs.k8s.io=addons-493601 --label created_by.minikube.sigs.k8s.io=true
	I0307 17:36:06.404530  286989 oci.go:103] Successfully created a docker volume addons-493601
	I0307 17:36:06.404639  286989 cli_runner.go:164] Run: docker run --rm --name addons-493601-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493601 --entrypoint /usr/bin/test -v addons-493601:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 17:36:08.566123  286989 cli_runner.go:217] Completed: docker run --rm --name addons-493601-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493601 --entrypoint /usr/bin/test -v addons-493601:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.161443022s)
	I0307 17:36:08.566156  286989 oci.go:107] Successfully prepared a docker volume addons-493601
	I0307 17:36:08.566174  286989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 17:36:08.566193  286989 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 17:36:08.566279  286989 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-493601:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 17:36:12.815531  286989 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-493601:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.249206586s)
	I0307 17:36:12.815564  286989 kic.go:203] duration metric: took 4.249367603s to extract preloaded images to volume ...
	W0307 17:36:12.815702  286989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 17:36:12.815808  286989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 17:36:12.869091  286989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-493601 --name addons-493601 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-493601 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-493601 --network addons-493601 --ip 192.168.49.2 --volume addons-493601:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0307 17:36:13.207906  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Running}}
	I0307 17:36:13.227631  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:13.250662  286989 cli_runner.go:164] Run: docker exec addons-493601 stat /var/lib/dpkg/alternatives/iptables
	I0307 17:36:13.303695  286989 oci.go:144] the created container "addons-493601" has a running status.
	I0307 17:36:13.303727  286989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa...
	I0307 17:36:14.193394  286989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 17:36:14.213851  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:14.233063  286989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 17:36:14.233082  286989 kic_runner.go:114] Args: [docker exec --privileged addons-493601 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 17:36:14.280110  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:14.299292  286989 machine.go:94] provisionDockerMachine start ...
	I0307 17:36:14.299378  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:14.319919  286989 main.go:141] libmachine: Using SSH client type: native
	I0307 17:36:14.320279  286989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0307 17:36:14.320293  286989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 17:36:14.456975  286989 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493601
	
	I0307 17:36:14.456999  286989 ubuntu.go:169] provisioning hostname "addons-493601"
	I0307 17:36:14.457064  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:14.473165  286989 main.go:141] libmachine: Using SSH client type: native
	I0307 17:36:14.473424  286989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0307 17:36:14.473442  286989 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-493601 && echo "addons-493601" | sudo tee /etc/hostname
	I0307 17:36:14.614780  286989 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-493601
	
	I0307 17:36:14.614925  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:14.635828  286989 main.go:141] libmachine: Using SSH client type: native
	I0307 17:36:14.636081  286989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0307 17:36:14.636103  286989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-493601' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-493601/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-493601' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 17:36:14.769466  286989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
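
For context, the provisioning steps above are plain SSH: dial the published 127.0.0.1:33142 port with the generated id_rsa key and run each command. A minimal Go sketch of that pattern (hypothetical, not minikube's actual ssh_runner; key path and port are taken from the log above):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and address as reported in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33142", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same hostname-provisioning command the log shows being run over SSH.
	out, err := sess.CombinedOutput(`sudo hostname addons-493601 && echo "addons-493601" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
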
	I0307 17:36:14.769496  286989 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18241-280769/.minikube CaCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18241-280769/.minikube}
	I0307 17:36:14.769547  286989 ubuntu.go:177] setting up certificates
	I0307 17:36:14.769558  286989 provision.go:84] configureAuth start
	I0307 17:36:14.769627  286989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493601
	I0307 17:36:14.785319  286989 provision.go:143] copyHostCerts
	I0307 17:36:14.785401  286989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/ca.pem (1078 bytes)
	I0307 17:36:14.785575  286989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/cert.pem (1123 bytes)
	I0307 17:36:14.785655  286989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/key.pem (1675 bytes)
	I0307 17:36:14.785715  286989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem org=jenkins.addons-493601 san=[127.0.0.1 192.168.49.2 addons-493601 localhost minikube]
	I0307 17:36:15.366388  286989 provision.go:177] copyRemoteCerts
	I0307 17:36:15.366460  286989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 17:36:15.366502  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:15.383422  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:15.478287  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 17:36:15.502898  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 17:36:15.528183  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 17:36:15.552280  286989 provision.go:87] duration metric: took 782.693513ms to configureAuth
	I0307 17:36:15.552309  286989 ubuntu.go:193] setting minikube options for container-runtime
	I0307 17:36:15.552545  286989 config.go:182] Loaded profile config "addons-493601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:36:15.552557  286989 machine.go:97] duration metric: took 1.2532482s to provisionDockerMachine
	I0307 17:36:15.552565  286989 client.go:171] duration metric: took 10.060359021s to LocalClient.Create
	I0307 17:36:15.552584  286989 start.go:167] duration metric: took 10.060423964s to libmachine.API.Create "addons-493601"
	I0307 17:36:15.552601  286989 start.go:293] postStartSetup for "addons-493601" (driver="docker")
	I0307 17:36:15.552617  286989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 17:36:15.552675  286989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 17:36:15.552725  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:15.568712  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:15.662459  286989 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 17:36:15.665679  286989 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 17:36:15.665714  286989 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 17:36:15.665728  286989 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 17:36:15.665735  286989 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 17:36:15.665745  286989 filesync.go:126] Scanning /home/jenkins/minikube-integration/18241-280769/.minikube/addons for local assets ...
	I0307 17:36:15.665813  286989 filesync.go:126] Scanning /home/jenkins/minikube-integration/18241-280769/.minikube/files for local assets ...
	I0307 17:36:15.665842  286989 start.go:296] duration metric: took 113.235817ms for postStartSetup
	I0307 17:36:15.666159  286989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493601
	I0307 17:36:15.680865  286989 profile.go:142] Saving config to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/config.json ...
	I0307 17:36:15.681151  286989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 17:36:15.681207  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:15.696383  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:15.786439  286989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 17:36:15.790944  286989 start.go:128] duration metric: took 10.30155785s to createHost
	I0307 17:36:15.790970  286989 start.go:83] releasing machines lock for "addons-493601", held for 10.301713058s
	I0307 17:36:15.791043  286989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-493601
	I0307 17:36:15.806557  286989 ssh_runner.go:195] Run: cat /version.json
	I0307 17:36:15.806581  286989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 17:36:15.806613  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:15.806659  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:15.829782  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:15.830562  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:16.033144  286989 ssh_runner.go:195] Run: systemctl --version
	I0307 17:36:16.038191  286989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 17:36:16.042581  286989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 17:36:16.068381  286989 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 17:36:16.068457  286989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 17:36:16.097480  286989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
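
The two find commands above first patch the loopback config, then move any bridge/podman CNI configs aside by renaming them to *.mk_disabled. A rough Go equivalent of the disable step (a sketch under the assumption that a local rename suffices; minikube performs this over SSH with find/mv):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, path := range matches {
		base := filepath.Base(path)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		// Same selection the find command makes: bridge or podman configs.
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", path)
		}
	}
}
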
	I0307 17:36:16.097626  286989 start.go:494] detecting cgroup driver to use...
	I0307 17:36:16.097676  286989 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 17:36:16.097756  286989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 17:36:16.110271  286989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 17:36:16.121789  286989 docker.go:217] disabling cri-docker service (if available) ...
	I0307 17:36:16.121882  286989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 17:36:16.136449  286989 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 17:36:16.150892  286989 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 17:36:16.229436  286989 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 17:36:16.321455  286989 docker.go:233] disabling docker service ...
	I0307 17:36:16.321583  286989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 17:36:16.341856  286989 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 17:36:16.353843  286989 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 17:36:16.441040  286989 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 17:36:16.528183  286989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 17:36:16.540413  286989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 17:36:16.558055  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 17:36:16.568849  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 17:36:16.579383  286989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 17:36:16.579508  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 17:36:16.590612  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 17:36:16.601135  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 17:36:16.611658  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 17:36:16.622102  286989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 17:36:16.632291  286989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 17:36:16.642704  286989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 17:36:16.651773  286989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 17:36:16.660356  286989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 17:36:16.736536  286989 ssh_runner.go:195] Run: sudo systemctl restart containerd
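
Each sed above is a line-anchored regex rewrite of /etc/containerd/config.toml; the SystemdCgroup toggle, for example (forced to false here to match the "cgroupfs" driver detected on the host), looks like this as a Go sketch (the 0o644 mode is an assumption):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// (?m) makes ^ and $ match per line, like sed's line-oriented mode;
	// ${1} preserves the original indentation captured by (\s*).
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
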
	I0307 17:36:16.855111  286989 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 17:36:16.855260  286989 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 17:36:16.858936  286989 start.go:562] Will wait 60s for crictl version
	I0307 17:36:16.859051  286989 ssh_runner.go:195] Run: which crictl
	I0307 17:36:16.862567  286989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 17:36:16.898647  286989 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 17:36:16.898778  286989 ssh_runner.go:195] Run: containerd --version
	I0307 17:36:16.919876  286989 ssh_runner.go:195] Run: containerd --version
	I0307 17:36:16.946017  286989 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0307 17:36:16.948247  286989 cli_runner.go:164] Run: docker network inspect addons-493601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 17:36:16.963348  286989 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0307 17:36:16.967081  286989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 17:36:16.977786  286989 kubeadm.go:877] updating cluster {Name:addons-493601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-493601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 17:36:16.977925  286989 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 17:36:16.978017  286989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 17:36:17.014522  286989 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 17:36:17.014576  286989 containerd.go:519] Images already preloaded, skipping extraction
	I0307 17:36:17.014648  286989 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 17:36:17.052893  286989 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 17:36:17.052915  286989 cache_images.go:84] Images are preloaded, skipping loading
	I0307 17:36:17.052924  286989 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0307 17:36:17.053021  286989 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-493601 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-493601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 17:36:17.053093  286989 ssh_runner.go:195] Run: sudo crictl info
	I0307 17:36:17.088551  286989 cni.go:84] Creating CNI manager for ""
	I0307 17:36:17.088577  286989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:36:17.088589  286989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 17:36:17.088616  286989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-493601 NodeName:addons-493601 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 17:36:17.088761  286989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-493601"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 17:36:17.088849  286989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 17:36:17.098507  286989 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 17:36:17.098585  286989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 17:36:17.107929  286989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0307 17:36:17.125987  286989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 17:36:17.144063  286989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0307 17:36:17.162306  286989 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0307 17:36:17.166050  286989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
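
The bash one-liner above is an idempotent hosts-file edit: drop any existing control-plane.minikube.internal line, append the fresh mapping, and copy the result back into place. A minimal Go sketch of the same idea (assumes direct write access to /etc/hosts instead of the sudo cp dance):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the same hostname; it is re-added below.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
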
	I0307 17:36:17.177077  286989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 17:36:17.263245  286989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 17:36:17.280454  286989 certs.go:68] Setting up /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601 for IP: 192.168.49.2
	I0307 17:36:17.280525  286989 certs.go:194] generating shared ca certs ...
	I0307 17:36:17.280556  286989 certs.go:226] acquiring lock for ca certs: {Name:mka3b4968cfa6fbc711689192ca27e019bb8f9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:17.280715  286989 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key
	I0307 17:36:17.604835  286989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt ...
	I0307 17:36:17.604875  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt: {Name:mke69aed9821d79716ae99bd8590c4020e25f842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:17.605572  286989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key ...
	I0307 17:36:17.605589  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key: {Name:mk150822996552148c7b6ec6482f977f324fdc8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:17.605696  286989 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key
	I0307 17:36:18.040137  286989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.crt ...
	I0307 17:36:18.040171  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.crt: {Name:mkd1ec8d4133d45b545e9f0e4a3ac61df1c87ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:18.040997  286989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key ...
	I0307 17:36:18.041017  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key: {Name:mk72b97723bc671e722343fb9daa7ffa1d155f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:18.041715  286989 certs.go:256] generating profile certs ...
	I0307 17:36:18.041785  286989 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.key
	I0307 17:36:18.041804  286989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt with IP's: []
	I0307 17:36:18.795920  286989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt ...
	I0307 17:36:18.795955  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: {Name:mk81e0d7c049e6c14e905b8b713a3525436bfc9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:18.796147  286989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.key ...
	I0307 17:36:18.796160  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.key: {Name:mk986a3cbf5d83ac902fa73368a704438734332b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:18.796629  286989 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key.8ac5b572
	I0307 17:36:18.796654  286989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt.8ac5b572 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0307 17:36:19.466618  286989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt.8ac5b572 ...
	I0307 17:36:19.466649  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt.8ac5b572: {Name:mk1ac95d1f92b3956bd15dbf9ccb2499eef5c32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:19.467272  286989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key.8ac5b572 ...
	I0307 17:36:19.467290  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key.8ac5b572: {Name:mk757b47c3aa8a7259b8ff50c616c042581c716a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:19.467385  286989 certs.go:381] copying /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt.8ac5b572 -> /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt
	I0307 17:36:19.467468  286989 certs.go:385] copying /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key.8ac5b572 -> /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key
	I0307 17:36:19.467523  286989 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.key
	I0307 17:36:19.467543  286989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.crt with IP's: []
	I0307 17:36:19.795396  286989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.crt ...
	I0307 17:36:19.795430  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.crt: {Name:mkdca9d4c8c7933b172aea005dbdf076d4cc8b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:19.796047  286989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.key ...
	I0307 17:36:19.796066  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.key: {Name:mkf69c2401387972371c17de2c5445eeeebdcb35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:19.796259  286989 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 17:36:19.796302  286989 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem (1078 bytes)
	I0307 17:36:19.796329  286989 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem (1123 bytes)
	I0307 17:36:19.796358  286989 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem (1675 bytes)
	I0307 17:36:19.796938  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 17:36:19.820701  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 17:36:19.844038  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 17:36:19.867654  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 17:36:19.891096  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 17:36:19.914840  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 17:36:19.938267  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 17:36:19.962406  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 17:36:19.985546  286989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 17:36:20.021200  286989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 17:36:20.041115  286989 ssh_runner.go:195] Run: openssl version
	I0307 17:36:20.046736  286989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 17:36:20.056535  286989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 17:36:20.060041  286989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0307 17:36:20.060106  286989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 17:36:20.067113  286989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 17:36:20.076362  286989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 17:36:20.079445  286989 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
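
The first-start heuristic here is simply "does the client cert exist yet": stat exiting 1 with ENOENT is read as a fresh cluster. The equivalent local check in Go is a one-liner around os.Stat (sketch; the log does this via stat over SSH instead):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("cert doesn't exist, likely first start")
	case err != nil:
		fmt.Println("stat failed:", err)
	default:
		fmt.Println("cert present, existing cluster")
	}
}
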
	I0307 17:36:20.079495  286989 kubeadm.go:391] StartCluster: {Name:addons-493601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-493601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:36:20.079571  286989 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 17:36:20.079626  286989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 17:36:20.118077  286989 cri.go:89] found id: ""
	I0307 17:36:20.118155  286989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 17:36:20.127481  286989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 17:36:20.136311  286989 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0307 17:36:20.136399  286989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 17:36:20.145292  286989 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 17:36:20.145313  286989 kubeadm.go:156] found existing configuration files:
	
	I0307 17:36:20.145368  286989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 17:36:20.154506  286989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 17:36:20.154618  286989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 17:36:20.163339  286989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 17:36:20.171943  286989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 17:36:20.172022  286989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 17:36:20.180725  286989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 17:36:20.190098  286989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 17:36:20.190189  286989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 17:36:20.198667  286989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 17:36:20.207855  286989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 17:36:20.207953  286989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 17:36:20.216177  286989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 17:36:20.260955  286989 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 17:36:20.261303  286989 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 17:36:20.311766  286989 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0307 17:36:20.311840  286989 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0307 17:36:20.311880  286989 kubeadm.go:309] OS: Linux
	I0307 17:36:20.311929  286989 kubeadm.go:309] CGROUPS_CPU: enabled
	I0307 17:36:20.311979  286989 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0307 17:36:20.312031  286989 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0307 17:36:20.312081  286989 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0307 17:36:20.312129  286989 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0307 17:36:20.312179  286989 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0307 17:36:20.312226  286989 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0307 17:36:20.312277  286989 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0307 17:36:20.312327  286989 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0307 17:36:20.384709  286989 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 17:36:20.384822  286989 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 17:36:20.384919  286989 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 17:36:20.610184  286989 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 17:36:20.613817  286989 out.go:204]   - Generating certificates and keys ...
	I0307 17:36:20.613920  286989 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 17:36:20.614011  286989 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 17:36:20.768133  286989 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 17:36:21.132507  286989 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 17:36:21.756839  286989 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 17:36:22.305007  286989 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 17:36:22.770039  286989 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 17:36:22.770182  286989 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-493601 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 17:36:23.090940  286989 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 17:36:23.091111  286989 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-493601 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 17:36:23.274171  286989 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 17:36:23.566378  286989 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 17:36:24.488690  286989 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 17:36:24.488849  286989 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 17:36:25.207425  286989 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 17:36:25.555663  286989 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 17:36:26.065345  286989 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 17:36:26.763705  286989 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 17:36:26.764433  286989 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 17:36:26.768782  286989 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 17:36:26.772105  286989 out.go:204]   - Booting up control plane ...
	I0307 17:36:26.772204  286989 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 17:36:26.772279  286989 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 17:36:26.772347  286989 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 17:36:26.782965  286989 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 17:36:26.783067  286989 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 17:36:26.783115  286989 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 17:36:26.873318  286989 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 17:36:34.877949  286989 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.004705 seconds
	I0307 17:36:34.878079  286989 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 17:36:34.894015  286989 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 17:36:35.424657  286989 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 17:36:35.424852  286989 kubeadm.go:309] [mark-control-plane] Marking the node addons-493601 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 17:36:35.937229  286989 kubeadm.go:309] [bootstrap-token] Using token: k0sbmp.lie3dacwdtzpbi0i
	I0307 17:36:35.939345  286989 out.go:204]   - Configuring RBAC rules ...
	I0307 17:36:35.939470  286989 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 17:36:35.948184  286989 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 17:36:35.958453  286989 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 17:36:35.962264  286989 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 17:36:35.966672  286989 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 17:36:35.970673  286989 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 17:36:35.985411  286989 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 17:36:36.234881  286989 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 17:36:36.352413  286989 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 17:36:36.353734  286989 kubeadm.go:309] 
	I0307 17:36:36.353816  286989 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 17:36:36.353822  286989 kubeadm.go:309] 
	I0307 17:36:36.353897  286989 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 17:36:36.353901  286989 kubeadm.go:309] 
	I0307 17:36:36.353926  286989 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 17:36:36.353983  286989 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 17:36:36.354032  286989 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 17:36:36.354036  286989 kubeadm.go:309] 
	I0307 17:36:36.354088  286989 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 17:36:36.354100  286989 kubeadm.go:309] 
	I0307 17:36:36.354146  286989 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 17:36:36.354150  286989 kubeadm.go:309] 
	I0307 17:36:36.354209  286989 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 17:36:36.354282  286989 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 17:36:36.354350  286989 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 17:36:36.354354  286989 kubeadm.go:309] 
	I0307 17:36:36.354441  286989 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 17:36:36.354516  286989 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 17:36:36.354521  286989 kubeadm.go:309] 
	I0307 17:36:36.354617  286989 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token k0sbmp.lie3dacwdtzpbi0i \
	I0307 17:36:36.354717  286989 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3b4ebe8e04b71a1913e75325f1720f08bf63a61f19ff551d4b6f5213bf81a3af \
	I0307 17:36:36.354737  286989 kubeadm.go:309] 	--control-plane 
	I0307 17:36:36.354741  286989 kubeadm.go:309] 
	I0307 17:36:36.354828  286989 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 17:36:36.354834  286989 kubeadm.go:309] 
	I0307 17:36:36.354912  286989 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token k0sbmp.lie3dacwdtzpbi0i \
	I0307 17:36:36.355010  286989 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:3b4ebe8e04b71a1913e75325f1720f08bf63a61f19ff551d4b6f5213bf81a3af 
	I0307 17:36:36.357348  286989 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0307 17:36:36.357459  286989 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
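
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. A small Go sketch that reproduces it from the CA cert on disk (path taken from the certs steps earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
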
	I0307 17:36:36.357474  286989 cni.go:84] Creating CNI manager for ""
	I0307 17:36:36.357482  286989 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:36:36.359554  286989 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 17:36:36.361363  286989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 17:36:36.381321  286989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 17:36:36.381339  286989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 17:36:36.407243  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 17:36:37.333101  286989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 17:36:37.333200  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:37.333268  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-493601 minikube.k8s.io/updated_at=2024_03_07T17_36_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f minikube.k8s.io/name=addons-493601 minikube.k8s.io/primary=true
	I0307 17:36:37.478590  286989 ops.go:34] apiserver oom_adj: -16
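
That -16 comes from reading /proc/<apiserver-pid>/oom_adj, as the bash command two lines up shows (the legacy interface; modern kernels also expose oom_score_adj). A trivial Go sketch with a placeholder pid:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	pid := 1234 // placeholder; found via pgrep kube-apiserver in the log
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
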
	I0307 17:36:37.479046  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:37.979355  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:38.479881  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:38.979232  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:39.479210  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:39.979284  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:40.479683  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:40.979243  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:41.479259  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:41.979861  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:42.479198  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:42.979755  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:43.479111  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:43.979193  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:44.479371  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:44.979650  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:45.479734  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:45.980089  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:46.479189  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:46.979258  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:47.479554  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:47.980157  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:48.480029  286989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 17:36:48.588718  286989 kubeadm.go:1106] duration metric: took 11.255585435s to wait for elevateKubeSystemPrivileges
	W0307 17:36:48.588749  286989 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 17:36:48.588756  286989 kubeadm.go:393] duration metric: took 28.509268094s to StartCluster
	I0307 17:36:48.588777  286989 settings.go:142] acquiring lock: {Name:mk7fc8981edba83f2165d6d3660f0909c818732a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:48.589260  286989 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:36:48.589703  286989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/kubeconfig: {Name:mkb730a03bcec144218b310b25ab397685c133af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 17:36:48.591076  286989 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 17:36:48.591221  286989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 17:36:48.591409  286989 config.go:182] Loaded profile config "addons-493601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:36:48.591418  286989 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 17:36:48.593596  286989 out.go:177] * Verifying Kubernetes components...
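Every key in the toEnable map above that is set to true produces a matching "Setting addon ...=true" goroutine in the lines that follow. The same selection could be made by hand with the addons subcommand, for example (illustrative):

	# Enable a single addon on the same profile from the command line.
	out/minikube-linux-arm64 -p addons-493601 addons enable ingress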
	I0307 17:36:48.595422  286989 addons.go:69] Setting yakd=true in profile "addons-493601"
	I0307 17:36:48.595448  286989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 17:36:48.595463  286989 addons.go:234] Setting addon yakd=true in "addons-493601"
	I0307 17:36:48.595494  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.595545  286989 addons.go:69] Setting ingress-dns=true in profile "addons-493601"
	I0307 17:36:48.595583  286989 addons.go:234] Setting addon ingress-dns=true in "addons-493601"
	I0307 17:36:48.595615  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.596050  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.596194  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.597571  286989 addons.go:69] Setting cloud-spanner=true in profile "addons-493601"
	I0307 17:36:48.597614  286989 addons.go:234] Setting addon cloud-spanner=true in "addons-493601"
	I0307 17:36:48.597651  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.598177  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.598580  286989 addons.go:69] Setting inspektor-gadget=true in profile "addons-493601"
	I0307 17:36:48.598610  286989 addons.go:234] Setting addon inspektor-gadget=true in "addons-493601"
	I0307 17:36:48.598650  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.599052  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.599201  286989 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-493601"
	I0307 17:36:48.599243  286989 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-493601"
	I0307 17:36:48.599275  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.599686  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.603683  286989 addons.go:69] Setting default-storageclass=true in profile "addons-493601"
	I0307 17:36:48.603750  286989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-493601"
	I0307 17:36:48.604079  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.613928  286989 addons.go:69] Setting gcp-auth=true in profile "addons-493601"
	I0307 17:36:48.613985  286989 mustload.go:65] Loading cluster: addons-493601
	I0307 17:36:48.614175  286989 config.go:182] Loaded profile config "addons-493601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:36:48.614435  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.614904  286989 addons.go:69] Setting metrics-server=true in profile "addons-493601"
	I0307 17:36:48.614946  286989 addons.go:234] Setting addon metrics-server=true in "addons-493601"
	I0307 17:36:48.614977  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.615384  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.626693  286989 addons.go:69] Setting ingress=true in profile "addons-493601"
	I0307 17:36:48.626740  286989 addons.go:234] Setting addon ingress=true in "addons-493601"
	I0307 17:36:48.626799  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.627260  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.632642  286989 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-493601"
	I0307 17:36:48.632687  286989 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-493601"
	I0307 17:36:48.632723  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.633175  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.645609  286989 addons.go:69] Setting registry=true in profile "addons-493601"
	I0307 17:36:48.645655  286989 addons.go:234] Setting addon registry=true in "addons-493601"
	I0307 17:36:48.645694  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.646137  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.682756  286989 addons.go:69] Setting storage-provisioner=true in profile "addons-493601"
	I0307 17:36:48.682805  286989 addons.go:234] Setting addon storage-provisioner=true in "addons-493601"
	I0307 17:36:48.682840  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.683285  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.723218  286989 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-493601"
	I0307 17:36:48.723234  286989 addons.go:69] Setting volumesnapshots=true in profile "addons-493601"
	I0307 17:36:48.734908  286989 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 17:36:48.749142  286989 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-493601"
	I0307 17:36:48.749253  286989 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 17:36:48.749281  286989 addons.go:234] Setting addon volumesnapshots=true in "addons-493601"
	I0307 17:36:48.751013  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.751481  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.751581  286989 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 17:36:48.751767  286989 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 17:36:48.752020  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.752033  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 17:36:48.762904  286989 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 17:36:48.762931  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 17:36:48.763003  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:48.771443  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:48.772822  286989 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 17:36:48.772840  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 17:36:48.772895  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
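The Go template passed to `docker container inspect` above extracts the host port Docker published for the node container's SSH port 22; every `sshutil` dial below uses the port it returns (33142 in this run). Run standalone:

	# Print the published host port for the container's 22/tcp (here: 33142).
	docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-493601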
	I0307 17:36:48.850834  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.852493  286989 addons.go:234] Setting addon default-storageclass=true in "addons-493601"
	I0307 17:36:48.856210  286989 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 17:36:48.862091  286989 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 17:36:48.869390  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 17:36:48.869451  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:48.874709  286989 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 17:36:48.874726  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 17:36:48.875214  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:48.875675  286989 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 17:36:48.878533  286989 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 17:36:48.878611  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:48.879386  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 17:36:48.879445  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:48.901453  286989 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 17:36:48.904929  286989 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 17:36:48.904948  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 17:36:48.961180  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 17:36:48.963113  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 17:36:48.965119  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 17:36:48.972394  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 17:36:48.974167  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:48.974185  286989 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 17:36:48.974196  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 17:36:48.981658  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 17:36:48.983809  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:48.985780  286989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 17:36:48.987344  286989 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 17:36:48.988713  286989 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 17:36:48.988772  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 17:36:48.988868  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.005697  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 17:36:49.011401  286989 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 17:36:49.013460  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 17:36:49.013660  286989 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 17:36:49.013689  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 17:36:49.013768  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.033099  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 17:36:49.033177  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.051635  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.053090  286989 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 17:36:49.056328  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 17:36:49.058457  286989 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 17:36:49.063000  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 17:36:49.063077  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.073121  286989 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 17:36:49.073146  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 17:36:49.073218  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.095242  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.102850  286989 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-493601"
	I0307 17:36:49.102897  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:49.103316  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:49.112310  286989 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 17:36:49.112335  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 17:36:49.112399  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.140625  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.140686  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.192910  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.227077  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.229915  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.246257  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.269864  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.272790  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.284892  286989 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 17:36:49.285267  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:49.286923  286989 out.go:177]   - Using image docker.io/busybox:stable
	I0307 17:36:49.289373  286989 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 17:36:49.289386  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 17:36:49.289442  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:49.314228  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	W0307 17:36:49.321094  286989 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0307 17:36:49.321122  286989 retry.go:31] will retry after 125.692398ms: ssh: handshake failed: EOF
	I0307 17:36:49.419558  286989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 17:36:49.419654  286989 ssh_runner.go:195] Run: sudo systemctl start kubelet
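The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the Docker network gateway. Assuming the stock Corefile layout, the injected block is exactly what the sed expression carries:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}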
	I0307 17:36:49.478238  286989 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 17:36:49.478309  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 17:36:49.618022  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 17:36:49.628844  286989 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 17:36:49.628874  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 17:36:49.752483  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 17:36:49.791242  286989 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 17:36:49.791278  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 17:36:49.817821  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 17:36:49.845622  286989 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 17:36:49.845699  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 17:36:49.849962  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 17:36:49.867955  286989 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 17:36:49.868031  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 17:36:49.884060  286989 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 17:36:49.884137  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 17:36:49.904759  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 17:36:49.904824  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 17:36:49.910413  286989 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 17:36:49.910478  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 17:36:49.931962  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 17:36:49.951387  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 17:36:49.974381  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 17:36:49.988345  286989 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 17:36:49.988418  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 17:36:50.031937  286989 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 17:36:50.032022  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 17:36:50.066341  286989 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 17:36:50.066428  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 17:36:50.090115  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 17:36:50.090222  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 17:36:50.093682  286989 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 17:36:50.093751  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 17:36:50.213146  286989 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 17:36:50.213227  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 17:36:50.240208  286989 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 17:36:50.240289  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 17:36:50.278034  286989 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 17:36:50.278106  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 17:36:50.409069  286989 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 17:36:50.409146  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 17:36:50.428511  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 17:36:50.428588  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 17:36:50.433111  286989 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 17:36:50.433199  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 17:36:50.567871  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 17:36:50.641262  286989 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 17:36:50.641327  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 17:36:50.693036  286989 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 17:36:50.693126  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 17:36:50.754438  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 17:36:50.754503  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 17:36:50.766553  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 17:36:50.766626  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 17:36:50.872754  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 17:36:50.987252  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 17:36:51.010786  286989 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 17:36:51.010876  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 17:36:51.014928  286989 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 17:36:51.015017  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 17:36:51.048936  286989 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 17:36:51.049016  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 17:36:51.337104  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 17:36:51.342329  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 17:36:51.342355  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 17:36:51.347337  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 17:36:51.639296  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 17:36:51.639328  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 17:36:52.099144  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 17:36:52.099171  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 17:36:52.199152  286989 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.779470718s)
	I0307 17:36:52.199984  286989 node_ready.go:35] waiting up to 6m0s for node "addons-493601" to be "Ready" ...
	I0307 17:36:52.200155  286989 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.780573877s)
	I0307 17:36:52.200176  286989 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0307 17:36:52.210211  286989 node_ready.go:49] node "addons-493601" has status "Ready":"True"
	I0307 17:36:52.210239  286989 node_ready.go:38] duration metric: took 10.228215ms for node "addons-493601" to be "Ready" ...
	I0307 17:36:52.210250  286989 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 17:36:52.232886  286989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jcdhr" in "kube-system" namespace to be "Ready" ...
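The per-pod waits above check each system-critical pod's Ready condition; an equivalent one-off check from outside the test could look like this (illustrative):

	# List kube-dns pods with their Ready condition status.
	kubectl --context addons-493601 -n kube-system get pods -l k8s-app=kube-dns \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'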
	I0307 17:36:52.487904  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 17:36:52.487929  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 17:36:52.642392  286989 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 17:36:52.642425  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 17:36:52.704189  286989 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-493601" context rescaled to 1 replicas
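The rescale above drops the stock two-replica coredns deployment to one, which is why the first replica's pod is reported "not found" immediately below; the equivalent manual command (illustrative):

	# Scale kube-system's coredns deployment down to a single replica.
	kubectl --context addons-493601 -n kube-system scale deployment coredns --replicas=1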
	I0307 17:36:52.736129  286989 pod_ready.go:97] error getting pod "coredns-5dd5756b68-jcdhr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-jcdhr" not found
	I0307 17:36:52.736156  286989 pod_ready.go:81] duration metric: took 503.238726ms for pod "coredns-5dd5756b68-jcdhr" in "kube-system" namespace to be "Ready" ...
	E0307 17:36:52.736168  286989 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-jcdhr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-jcdhr" not found
	I0307 17:36:52.736175  286989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace to be "Ready" ...
	I0307 17:36:52.931666  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 17:36:53.396850  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.778782657s)
	I0307 17:36:53.734068  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.981544581s)
	I0307 17:36:53.734173  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.91632486s)
	I0307 17:36:53.734238  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.884205283s)
	I0307 17:36:54.776271  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:36:55.676903  286989 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 17:36:55.676987  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:55.709978  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:56.349169  286989 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 17:36:56.540457  286989 addons.go:234] Setting addon gcp-auth=true in "addons-493601"
	I0307 17:36:56.540566  286989 host.go:66] Checking if "addons-493601" exists ...
	I0307 17:36:56.541058  286989 cli_runner.go:164] Run: docker container inspect addons-493601 --format={{.State.Status}}
	I0307 17:36:56.570274  286989 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 17:36:56.570325  286989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-493601
	I0307 17:36:56.595939  286989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/addons-493601/id_rsa Username:docker}
	I0307 17:36:57.245716  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:36:57.705437  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.75396571s)
	I0307 17:36:57.705531  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.73108437s)
	I0307 17:36:57.705725  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.137778911s)
	I0307 17:36:57.705766  286989 addons.go:470] Verifying addon registry=true in "addons-493601"
	I0307 17:36:57.705955  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.773912816s)
	I0307 17:36:57.706014  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.83318552s)
	I0307 17:36:57.706076  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.718748837s)
	I0307 17:36:57.706170  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.36903556s)
	I0307 17:36:57.706232  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.358864388s)
	I0307 17:36:57.707832  286989 out.go:177] * Verifying registry addon...
	I0307 17:36:57.709854  286989 addons.go:470] Verifying addon ingress=true in "addons-493601"
	I0307 17:36:57.710579  286989 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 17:36:57.710599  286989 addons.go:470] Verifying addon metrics-server=true in "addons-493601"
	W0307 17:36:57.710648  286989 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 17:36:57.712019  286989 out.go:177] * Verifying ingress addon...
	I0307 17:36:57.714636  286989 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 17:36:57.716000  286989 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-493601 service yakd-dashboard -n yakd-dashboard
	
	I0307 17:36:57.716188  286989 retry.go:31] will retry after 284.860985ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
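Both copies of the error above are the usual CRD establish race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, before the apiserver has registered them. The retry below (with `apply --force`) succeeds once the CRDs are established; to avoid the race by hand one could wait on the CRDs first, using the names from the stdout above:

	# Block until the snapshot CRDs are established before applying VolumeSnapshotClass objects.
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	    crd/volumesnapshots.snapshot.storage.k8s.io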
	I0307 17:36:57.726033  286989 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 17:36:57.726058  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:36:57.733897  286989 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 17:36:57.733961  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:36:58.003920  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 17:36:58.220800  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:36:58.233817  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:36:58.736925  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:36:58.739452  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:36:59.224276  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:36:59.229866  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:36:59.271964  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:36:59.370690  286989 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.800391335s)
	I0307 17:36:59.370719  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.438910495s)
	I0307 17:36:59.370811  286989 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-493601"
	I0307 17:36:59.373273  286989 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 17:36:59.375755  286989 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 17:36:59.376696  286989 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 17:36:59.378318  286989 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 17:36:59.381037  286989 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 17:36:59.381055  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 17:36:59.387023  286989 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 17:36:59.387054  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:36:59.433793  286989 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 17:36:59.433857  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 17:36:59.484265  286989 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 17:36:59.484349  286989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 17:36:59.505348  286989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 17:36:59.721016  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:36:59.723390  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:36:59.886953  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:00.232707  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:00.233514  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:00.279585  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.275618054s)
	I0307 17:37:00.387042  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:00.739343  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:00.743008  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:00.795991  286989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.290540938s)
	I0307 17:37:00.800311  286989 addons.go:470] Verifying addon gcp-auth=true in "addons-493601"
	I0307 17:37:00.803958  286989 out.go:177] * Verifying gcp-auth addon...
	I0307 17:37:00.807032  286989 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 17:37:00.815642  286989 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 17:37:00.815691  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:00.889138  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:01.220221  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:01.222129  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:01.311022  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:01.387814  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:01.720573  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:01.722048  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:01.743575  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:37:01.811983  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:01.887627  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:02.220930  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:02.223056  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:02.311352  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:02.387194  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:02.724806  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:02.726845  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:02.811115  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:02.889372  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:03.222187  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:03.222876  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:03.311151  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:03.387215  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:03.723134  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:03.724203  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:03.810698  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:03.886057  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:04.220311  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:04.221708  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:04.242381  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:37:04.310727  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:04.386264  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:04.721946  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:04.722507  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:04.812345  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:04.887170  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:05.222789  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:05.224734  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:05.311406  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:05.387757  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:05.722623  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:05.723278  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:05.811103  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:05.888101  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:06.222758  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:06.223933  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:06.243740  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:37:06.311635  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:06.393121  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:06.721889  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:06.723841  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:06.811853  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:06.887257  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:07.221703  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:07.223192  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:07.311340  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:07.386986  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:07.722844  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:07.723913  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:07.811143  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:07.887634  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:08.221319  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:08.223806  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:08.243987  286989 pod_ready.go:102] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"False"
	I0307 17:37:08.310722  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:08.388038  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:08.721235  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:08.723563  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:08.811944  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:08.889042  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:09.223715  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:09.229272  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:09.313791  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:09.387253  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:09.721973  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:09.723913  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:09.830224  286989 pod_ready.go:92] pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:09.830259  286989 pod_ready.go:81] duration metric: took 17.094069448s for pod "coredns-5dd5756b68-s5x4q" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.830272  286989 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.830850  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:09.836712  286989 pod_ready.go:92] pod "etcd-addons-493601" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:09.836738  286989 pod_ready.go:81] duration metric: took 6.458332ms for pod "etcd-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.836753  286989 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.842694  286989 pod_ready.go:92] pod "kube-apiserver-addons-493601" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:09.842718  286989 pod_ready.go:81] duration metric: took 5.957764ms for pod "kube-apiserver-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.842731  286989 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.849129  286989 pod_ready.go:92] pod "kube-controller-manager-addons-493601" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:09.849154  286989 pod_ready.go:81] duration metric: took 6.414977ms for pod "kube-controller-manager-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.849167  286989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pckpb" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.856313  286989 pod_ready.go:92] pod "kube-proxy-pckpb" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:09.856351  286989 pod_ready.go:81] duration metric: took 7.176139ms for pod "kube-proxy-pckpb" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.856363  286989 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:09.886909  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:10.140716  286989 pod_ready.go:92] pod "kube-scheduler-addons-493601" in "kube-system" namespace has status "Ready":"True"
	I0307 17:37:10.140741  286989 pod_ready.go:81] duration metric: took 284.370134ms for pod "kube-scheduler-addons-493601" in "kube-system" namespace to be "Ready" ...
	I0307 17:37:10.140751  286989 pod_ready.go:38] duration metric: took 17.930485097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 17:37:10.140775  286989 api_server.go:52] waiting for apiserver process to appear ...
	I0307 17:37:10.140841  286989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 17:37:10.156355  286989 api_server.go:72] duration metric: took 21.565231075s to wait for apiserver process to appear ...
	I0307 17:37:10.156398  286989 api_server.go:88] waiting for apiserver healthz status ...
	I0307 17:37:10.156421  286989 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0307 17:37:10.166139  286989 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0307 17:37:10.167658  286989 api_server.go:141] control plane version: v1.28.4
	I0307 17:37:10.167690  286989 api_server.go:131] duration metric: took 11.284235ms to wait for apiserver health ...
	I0307 17:37:10.167700  286989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 17:37:10.244297  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:10.244893  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:10.314706  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:10.350652  286989 system_pods.go:59] 18 kube-system pods found
	I0307 17:37:10.350745  286989 system_pods.go:61] "coredns-5dd5756b68-s5x4q" [925a6a0e-f442-447f-8e4d-0a5d1f192d39] Running
	I0307 17:37:10.350770  286989 system_pods.go:61] "csi-hostpath-attacher-0" [68b4d831-97ba-4a2a-8209-913605baf445] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 17:37:10.350820  286989 system_pods.go:61] "csi-hostpath-resizer-0" [b9121726-f6a0-449f-ba31-554f37133357] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 17:37:10.350852  286989 system_pods.go:61] "csi-hostpathplugin-hqktz" [cdfb3033-e717-4b58-bf77-62cd7a864ba8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 17:37:10.350883  286989 system_pods.go:61] "etcd-addons-493601" [b287d31a-a9ce-46f7-87ce-5455dc9235b0] Running
	I0307 17:37:10.350925  286989 system_pods.go:61] "kindnet-zxls5" [2dd6b35a-6d1f-4cd5-841e-74f3a5aa00b3] Running
	I0307 17:37:10.350959  286989 system_pods.go:61] "kube-apiserver-addons-493601" [c027fb08-d568-46f8-8f6a-189529641034] Running
	I0307 17:37:10.350977  286989 system_pods.go:61] "kube-controller-manager-addons-493601" [2b8e23d2-8078-4d71-941c-c1e1a4e2207b] Running
	I0307 17:37:10.350999  286989 system_pods.go:61] "kube-ingress-dns-minikube" [41040e7e-5e2e-4d0e-a966-2fecf5aa1690] Running
	I0307 17:37:10.351034  286989 system_pods.go:61] "kube-proxy-pckpb" [95c03287-99e2-4e23-959a-1ebeb4d0d09a] Running
	I0307 17:37:10.351069  286989 system_pods.go:61] "kube-scheduler-addons-493601" [81abc2b0-08e6-4f9e-9cb2-5afecbbf454e] Running
	I0307 17:37:10.351097  286989 system_pods.go:61] "metrics-server-69cf46c98-fcvvl" [d8c7b8dc-17a4-491d-a01d-a51c4bbd7aba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 17:37:10.351121  286989 system_pods.go:61] "nvidia-device-plugin-daemonset-cwt58" [195aa1f7-53dd-4d65-9e3d-2dea680daefa] Running
	I0307 17:37:10.351151  286989 system_pods.go:61] "registry-proxy-bfgsj" [7b075a61-9dc9-453e-a33f-f03666c06bc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 17:37:10.351197  286989 system_pods.go:61] "registry-qhxmn" [eb907622-518b-4292-acd8-b940ea762e72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 17:37:10.351223  286989 system_pods.go:61] "snapshot-controller-58dbcc7b99-9qfbh" [86778944-bd62-4c1f-a662-4996597f3677] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 17:37:10.351248  286989 system_pods.go:61] "snapshot-controller-58dbcc7b99-jvjpr" [1c457584-e50a-496e-925e-f74487c6d4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 17:37:10.351279  286989 system_pods.go:61] "storage-provisioner" [218bd1ae-283a-4846-95b7-7fd99a79ecdf] Running
	I0307 17:37:10.351311  286989 system_pods.go:74] duration metric: took 183.602179ms to wait for pod list to return data ...
	I0307 17:37:10.351334  286989 default_sa.go:34] waiting for default service account to be created ...
	I0307 17:37:10.387043  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:10.540700  286989 default_sa.go:45] found service account: "default"
	I0307 17:37:10.540767  286989 default_sa.go:55] duration metric: took 189.409289ms for default service account to be created ...
	I0307 17:37:10.540792  286989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 17:37:10.720026  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:10.724043  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:10.749627  286989 system_pods.go:86] 18 kube-system pods found
	I0307 17:37:10.749701  286989 system_pods.go:89] "coredns-5dd5756b68-s5x4q" [925a6a0e-f442-447f-8e4d-0a5d1f192d39] Running
	I0307 17:37:10.749726  286989 system_pods.go:89] "csi-hostpath-attacher-0" [68b4d831-97ba-4a2a-8209-913605baf445] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 17:37:10.749755  286989 system_pods.go:89] "csi-hostpath-resizer-0" [b9121726-f6a0-449f-ba31-554f37133357] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 17:37:10.749796  286989 system_pods.go:89] "csi-hostpathplugin-hqktz" [cdfb3033-e717-4b58-bf77-62cd7a864ba8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 17:37:10.749820  286989 system_pods.go:89] "etcd-addons-493601" [b287d31a-a9ce-46f7-87ce-5455dc9235b0] Running
	I0307 17:37:10.749841  286989 system_pods.go:89] "kindnet-zxls5" [2dd6b35a-6d1f-4cd5-841e-74f3a5aa00b3] Running
	I0307 17:37:10.749875  286989 system_pods.go:89] "kube-apiserver-addons-493601" [c027fb08-d568-46f8-8f6a-189529641034] Running
	I0307 17:37:10.749900  286989 system_pods.go:89] "kube-controller-manager-addons-493601" [2b8e23d2-8078-4d71-941c-c1e1a4e2207b] Running
	I0307 17:37:10.749921  286989 system_pods.go:89] "kube-ingress-dns-minikube" [41040e7e-5e2e-4d0e-a966-2fecf5aa1690] Running
	I0307 17:37:10.749943  286989 system_pods.go:89] "kube-proxy-pckpb" [95c03287-99e2-4e23-959a-1ebeb4d0d09a] Running
	I0307 17:37:10.749976  286989 system_pods.go:89] "kube-scheduler-addons-493601" [81abc2b0-08e6-4f9e-9cb2-5afecbbf454e] Running
	I0307 17:37:10.749999  286989 system_pods.go:89] "metrics-server-69cf46c98-fcvvl" [d8c7b8dc-17a4-491d-a01d-a51c4bbd7aba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 17:37:10.750022  286989 system_pods.go:89] "nvidia-device-plugin-daemonset-cwt58" [195aa1f7-53dd-4d65-9e3d-2dea680daefa] Running
	I0307 17:37:10.750045  286989 system_pods.go:89] "registry-proxy-bfgsj" [7b075a61-9dc9-453e-a33f-f03666c06bc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 17:37:10.750080  286989 system_pods.go:89] "registry-qhxmn" [eb907622-518b-4292-acd8-b940ea762e72] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 17:37:10.750107  286989 system_pods.go:89] "snapshot-controller-58dbcc7b99-9qfbh" [86778944-bd62-4c1f-a662-4996597f3677] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 17:37:10.750130  286989 system_pods.go:89] "snapshot-controller-58dbcc7b99-jvjpr" [1c457584-e50a-496e-925e-f74487c6d4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 17:37:10.750150  286989 system_pods.go:89] "storage-provisioner" [218bd1ae-283a-4846-95b7-7fd99a79ecdf] Running
	I0307 17:37:10.750186  286989 system_pods.go:126] duration metric: took 209.375547ms to wait for k8s-apps to be running ...
	I0307 17:37:10.750212  286989 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 17:37:10.750303  286989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 17:37:10.764504  286989 system_svc.go:56] duration metric: took 14.282675ms WaitForService to wait for kubelet
	I0307 17:37:10.764531  286989 kubeadm.go:576] duration metric: took 22.173414194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 17:37:10.764552  286989 node_conditions.go:102] verifying NodePressure condition ...
	I0307 17:37:10.812028  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:10.886839  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:10.941474  286989 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0307 17:37:10.941586  286989 node_conditions.go:123] node cpu capacity is 2
	I0307 17:37:10.941624  286989 node_conditions.go:105] duration metric: took 177.066128ms to run NodePressure ...
	I0307 17:37:10.941671  286989 start.go:240] waiting for startup goroutines ...
	I0307 17:37:11.222924  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:11.229128  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:11.313313  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:11.386552  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:11.720743  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:11.721457  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:11.811781  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:11.887544  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:12.220669  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:12.221460  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:12.311567  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:12.388088  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:12.720922  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:12.721799  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:12.811356  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:12.887205  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:13.222384  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:13.223225  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:13.311255  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:13.386997  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:13.720094  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:13.721357  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:13.811059  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:13.887118  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:14.219967  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:14.220598  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:14.310550  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:14.387499  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:14.721093  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:14.723761  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:14.811398  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:14.886988  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:15.223293  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:15.224542  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:15.311658  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:15.388548  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:15.734438  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:15.735611  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:15.811774  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:15.887590  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:16.222397  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:16.223582  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:16.311451  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:16.389260  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:16.726638  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:16.727616  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:16.811604  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:16.887819  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:17.223335  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:17.224506  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:17.311466  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:17.387340  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:17.722131  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:17.723270  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:17.811788  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:17.887466  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:18.221775  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:18.223753  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:18.311865  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:18.387018  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:18.721035  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 17:37:18.724025  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:18.811808  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:18.887104  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:19.226664  286989 kapi.go:107] duration metric: took 21.516083571s to wait for kubernetes.io/minikube-addons=registry ...
	I0307 17:37:19.227627  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:19.311124  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:19.386825  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:19.721132  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:19.811508  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:19.887521  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:20.222396  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:20.311437  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:20.386683  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:20.721360  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:20.811371  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:20.887504  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:21.221025  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:21.310812  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:21.386956  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:21.721218  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:21.811162  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:21.887131  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:22.220891  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:22.311939  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:22.389016  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:22.721383  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:22.811361  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:22.886550  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:23.222613  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:23.311487  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:23.388216  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:23.725110  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:23.811505  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:23.888080  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:24.221145  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:24.311464  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:24.387094  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:24.720258  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:24.811673  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:24.887396  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:25.221654  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:25.311088  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:25.387539  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:25.766183  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:25.829584  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:25.888541  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:26.221621  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:26.311326  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:26.387402  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:26.721740  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:26.811760  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:26.887052  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:27.221178  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:27.311238  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:27.386947  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:27.721218  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:27.811108  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:27.887610  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:28.221547  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:28.311462  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:28.387417  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:28.720967  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:28.811589  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:28.887082  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:29.221005  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:29.311182  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:29.389477  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:29.720967  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:29.811577  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:29.886605  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:30.225643  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:30.317110  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:30.386830  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:30.720823  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:30.811581  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:30.886455  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:31.221323  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:31.310723  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:31.387619  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:31.723664  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:31.811350  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:31.893798  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:32.221101  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:32.310969  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:32.386753  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:32.721327  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:32.811082  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:32.888479  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:33.221324  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:33.310672  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:33.387839  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:33.720356  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:33.811361  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:33.888839  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:34.220571  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:34.311674  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:34.388086  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:34.721171  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:34.816728  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:34.886799  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:35.221159  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:35.310619  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:35.387289  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:35.733103  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:35.810695  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:35.886964  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:36.221130  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:36.312141  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:36.387648  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:36.720415  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:36.811144  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:36.887481  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:37.221712  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:37.311336  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:37.387431  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:37.723738  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:37.811842  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:37.886557  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:38.221827  286989 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 17:37:38.319469  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:38.387993  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:38.724127  286989 kapi.go:107] duration metric: took 41.009487728s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 17:37:38.812274  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:38.888724  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:39.311436  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:39.387453  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:39.810807  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:39.890225  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:40.311438  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:40.404283  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:40.811130  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:40.886991  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:41.310757  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:41.388237  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:41.812466  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:41.887398  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:42.311877  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:42.387118  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:42.810677  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:42.886789  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:43.310806  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:43.389685  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:43.811364  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:43.887317  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:44.311132  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:44.387032  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:44.812516  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:44.886807  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 17:37:45.315990  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:45.386735  286989 kapi.go:107] duration metric: took 46.010038773s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 17:37:45.811587  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:46.310622  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:46.810605  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:47.310468  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:47.811464  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:48.311400  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:48.811364  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:49.311025  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:49.810507  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:50.311351  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:50.812053  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:51.313202  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:51.811176  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:52.311131  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:52.810937  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:53.310413  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:53.811420  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:54.311474  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:54.811552  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:55.310350  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:55.811245  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:56.311035  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:56.810913  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:57.310479  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:57.811524  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:58.310442  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:58.811545  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:59.310863  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:37:59.810579  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:00.312147  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:00.811114  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:01.311064  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:01.810968  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:02.311206  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:02.811208  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:03.311304  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:03.811158  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:04.311575  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:04.810673  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:05.310686  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:05.811611  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:06.310988  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:06.811549  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:07.311362  286989 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 17:38:07.811571  286989 kapi.go:107] duration metric: took 1m7.004536129s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 17:38:07.813638  286989 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-493601 cluster.
	I0307 17:38:07.815823  286989 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 17:38:07.817939  286989 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0307 17:38:07.820119  286989 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0307 17:38:07.822091  286989 addons.go:505] duration metric: took 1m19.230670208s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin default-storageclass storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0307 17:38:07.822148  286989 start.go:245] waiting for cluster config update ...
	I0307 17:38:07.822172  286989 start.go:254] writing updated cluster config ...
	I0307 17:38:07.822487  286989 ssh_runner.go:195] Run: rm -f paused
	I0307 17:38:08.188068  286989 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 17:38:08.194494  286989 out.go:177] * Done! kubectl is now configured to use "addons-493601" cluster and "default" namespace by default
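	
	The gcp-auth messages above double as a short how-to for opting a pod out of credential mounting. As a minimal sketch, assuming the conventional "true" value for the label (only the `gcp-auth-skip-secret` key is quoted in the output above; the pod and image names here are illustrative):
	
		# Hypothetical example: create a pod the gcp-auth webhook should skip.
		# Label key taken from the minikube output above; the value "true" and
		# the pod/image names are assumptions for illustration.
		kubectl --context addons-493601 run gcp-auth-skip-demo \
		  --image=nginx \
		  --labels="gcp-auth-skip-secret=true"
	
	As the output notes, pods that already exist keep their mounted credentials until they are recreated or the addon is re-enabled with --refresh.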
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f970ffab5af4b       fc9db2894f4e4       1 second ago         Exited              helper-pod                0                   cdc5e2f780280       helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e
	d1a53c903f28a       46bd05c4a04f3       5 seconds ago        Exited              busybox                   0                   c6b304ecf48d3       test-local-path
	b685f731edfbf       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app           2                   5ac3f7f96d48b       hello-world-app-5d77478584-cxdk7
	ab341220682ca       fc9db2894f4e4       10 seconds ago       Exited              helper-pod                0                   6f4a587de4894       helper-pod-create-pvc-d88010ac-e556-4434-84f7-887264f6234e
	454e48bdd55b7       be5e6f23a9904       33 seconds ago       Running             nginx                     0                   3c4ff800bcacd       nginx
	d9eca12781326       bafe72500920c       About a minute ago   Running             gcp-auth                  0                   0d279da1a90d6       gcp-auth-5f6b4f85fd-6mbj2
	18096d3a5c37d       1a024e390dd05       About a minute ago   Exited              patch                     0                   99dd374f03928       ingress-nginx-admission-patch-5qdsc
	541058129491b       1a024e390dd05       About a minute ago   Exited              create                    0                   f84b7ceb338a6       ingress-nginx-admission-create-qndkz
	360ee4d4c40ed       7ce2150c8929b       About a minute ago   Running             local-path-provisioner    0                   4122311fee336       local-path-provisioner-78b46b4d5c-hrgwh
	62ce6e3f78882       20e3f2db01e81       About a minute ago   Running             yakd                      0                   18b08ce73f5f1       yakd-dashboard-9947fc6bf-2nlg8
	d3ca1c4fa7574       41340d5d57adb       2 minutes ago        Running             cloud-spanner-emulator    0                   0761837b580b8       cloud-spanner-emulator-6548d5df46-zm4n5
	01adf01b6c14b       97e04611ad434       2 minutes ago        Running             coredns                   0                   4810a88254c62       coredns-5dd5756b68-s5x4q
	c273eef1d47ac       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   7c50470bc7836       storage-provisioner
	d034d2c6a4372       4740c1948d3fc       2 minutes ago        Running             kindnet-cni               0                   a398a2530fae8       kindnet-zxls5
	3b70593e2c3a8       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                0                   7c69f8441e0fc       kube-proxy-pckpb
	dc610faaa9e0d       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver            0                   58a52d3246669       kube-apiserver-addons-493601
	45652b6b2da60       05c284c929889       2 minutes ago        Running             kube-scheduler            0                   0f29afc906f58       kube-scheduler-addons-493601
	79fbda64ef744       9961cbceaf234       2 minutes ago        Running             kube-controller-manager   0                   f3aa38208e87e       kube-controller-manager-addons-493601
	5295bd103191c       9cdd6470f48c8       2 minutes ago        Running             etcd                      0                   ef01419aa36fc       etcd-addons-493601
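	
	The table above is the post-mortem snapshot of the node's CRI state. A similar listing can be reproduced directly on the cluster; a hedged sketch, assuming crictl is available inside the minikube node (it normally is for the containerd runtime):
	
		# Hypothetical reproduction: list all containers, running and exited,
		# inside the addons-493601 node via the CRI client.
		minikube -p addons-493601 ssh -- sudo crictl ps -a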
	
	
	==> containerd <==
	Mar 07 17:39:12 addons-493601 containerd[758]: time="2024-03-07T17:39:12.815059646Z" level=info msg="cleaning up dead shim"
	Mar 07 17:39:12 addons-493601 containerd[758]: time="2024-03-07T17:39:12.823886889Z" level=warning msg="cleanup warnings time=\"2024-03-07T17:39:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9735 runtime=io.containerd.runc.v2\n"
	Mar 07 17:39:12 addons-493601 containerd[758]: time="2024-03-07T17:39:12.842641253Z" level=info msg="TearDown network for sandbox \"c6b304ecf48d30a49ca34c6aba8c12cc2521951c252996452ae3c09d629558dd\" successfully"
	Mar 07 17:39:12 addons-493601 containerd[758]: time="2024-03-07T17:39:12.842863997Z" level=info msg="StopPodSandbox for \"c6b304ecf48d30a49ca34c6aba8c12cc2521951c252996452ae3c09d629558dd\" returns successfully"
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.716602024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e,Uid:952f58d4-9d8e-4a5d-a178-33d06efde090,Namespace:local-path-storage,Attempt:0,}"
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.756222982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.756357423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.756397849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.756628027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c pid=9819 runtime=io.containerd.runc.v2
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.828163246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e,Uid:952f58d4-9d8e-4a5d-a178-33d06efde090,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c\""
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.836179514Z" level=info msg="CreateContainer within sandbox \"cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c\" for container &ContainerMetadata{Name:helper-pod,Attempt:0,}"
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.853459347Z" level=info msg="CreateContainer within sandbox \"cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c\" for &ContainerMetadata{Name:helper-pod,Attempt:0,} returns container id \"f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f\""
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.862469638Z" level=info msg="StartContainer for \"f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f\""
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.948980075Z" level=info msg="StartContainer for \"f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f\" returns successfully"
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.979936579Z" level=info msg="shim disconnected" id=f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.980146335Z" level=warning msg="cleaning up after shim disconnected" id=f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f namespace=k8s.io
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.980241916Z" level=info msg="cleaning up dead shim"
	Mar 07 17:39:14 addons-493601 containerd[758]: time="2024-03-07T17:39:14.988356628Z" level=warning msg="cleanup warnings time=\"2024-03-07T17:39:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9902 runtime=io.containerd.runc.v2\n"
	Mar 07 17:39:15 addons-493601 containerd[758]: time="2024-03-07T17:39:15.164077333Z" level=info msg="StopContainer for \"360ee4d4c40ed69a2d5ce2080964a8b2e0883ab2127a741e8b0e3804cbd5e63d\" with timeout 30 (s)"
	Mar 07 17:39:15 addons-493601 containerd[758]: time="2024-03-07T17:39:15.164510267Z" level=info msg="Stop container \"360ee4d4c40ed69a2d5ce2080964a8b2e0883ab2127a741e8b0e3804cbd5e63d\" with signal terminated"
	Mar 07 17:39:16 addons-493601 containerd[758]: time="2024-03-07T17:39:16.796418083Z" level=info msg="StopPodSandbox for \"cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c\""
	Mar 07 17:39:16 addons-493601 containerd[758]: time="2024-03-07T17:39:16.796504638Z" level=info msg="Container to stop \"f970ffab5af4b7cf36e98dacc64dfe77074e9e46d7c9a76320f49ced0281a81f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 07 17:39:16 addons-493601 containerd[758]: time="2024-03-07T17:39:16.841229144Z" level=info msg="shim disconnected" id=cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c
	Mar 07 17:39:16 addons-493601 containerd[758]: time="2024-03-07T17:39:16.841289517Z" level=warning msg="cleaning up after shim disconnected" id=cdc5e2f780280681bcc538e7372a8ca1cdd6dc360b5dd588a65faa6906c9c68c namespace=k8s.io
	Mar 07 17:39:16 addons-493601 containerd[758]: time="2024-03-07T17:39:16.841300701Z" level=info msg="cleaning up dead shim"
	
	
	==> coredns [01adf01b6c14bbd8472295d06482d1f355f291b546b220323016912a67bba60b] <==
	[INFO] 10.244.0.19:34237 - 19212 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000280943s
	[INFO] 10.244.0.19:34237 - 44735 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001032276s
	[INFO] 10.244.0.19:53841 - 53905 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002460652s
	[INFO] 10.244.0.19:53841 - 31130 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001996555s
	[INFO] 10.244.0.19:34237 - 35826 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00143307s
	[INFO] 10.244.0.19:53841 - 4307 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000113017s
	[INFO] 10.244.0.19:34237 - 19330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037342s
	[INFO] 10.244.0.19:51236 - 11477 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050814s
	[INFO] 10.244.0.19:43227 - 16943 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000207991s
	[INFO] 10.244.0.19:51236 - 45152 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057903s
	[INFO] 10.244.0.19:43227 - 3844 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003625s
	[INFO] 10.244.0.19:51236 - 49107 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062244s
	[INFO] 10.244.0.19:43227 - 49296 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038302s
	[INFO] 10.244.0.19:51236 - 43455 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050043s
	[INFO] 10.244.0.19:43227 - 9131 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031639s
	[INFO] 10.244.0.19:43227 - 24476 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056394s
	[INFO] 10.244.0.19:51236 - 37737 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031163s
	[INFO] 10.244.0.19:43227 - 61189 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029547s
	[INFO] 10.244.0.19:51236 - 29426 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000843s
	[INFO] 10.244.0.19:43227 - 7823 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001601767s
	[INFO] 10.244.0.19:51236 - 3103 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001029002s
	[INFO] 10.244.0.19:43227 - 5906 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001637033s
	[INFO] 10.244.0.19:51236 - 25131 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001246626s
	[INFO] 10.244.0.19:43227 - 56005 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000307059s
	[INFO] 10.244.0.19:51236 - 27265 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069612s
	
	
	==> describe nodes <==
	Name:               addons-493601
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-493601
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f
	                    minikube.k8s.io/name=addons-493601
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T17_36_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-493601
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 17:36:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-493601
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 17:39:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 17:39:09 +0000   Thu, 07 Mar 2024 17:36:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 17:39:09 +0000   Thu, 07 Mar 2024 17:36:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 17:39:09 +0000   Thu, 07 Mar 2024 17:36:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 17:39:09 +0000   Thu, 07 Mar 2024 17:36:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-493601
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 427a27318e38408096a9a4d0f6c000f9
	  System UUID:                4acc3467-7c5f-4cb6-b173-06a3e1c4733c
	  Boot ID:                    a949ea88-4a69-4ab0-89c5-986450203265
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-zm4n5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  default                     hello-world-app-5d77478584-cxdk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-5f6b4f85fd-6mbj2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 coredns-5dd5756b68-s5x4q                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-addons-493601                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m41s
	  kube-system                 kindnet-zxls5                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-addons-493601               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-controller-manager-addons-493601      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-pckpb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-addons-493601               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  local-path-storage          local-path-provisioner-78b46b4d5c-hrgwh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-2nlg8             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m26s  kube-proxy       
	  Normal  Starting                 2m41s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m41s  kubelet          Node addons-493601 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s  kubelet          Node addons-493601 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s  kubelet          Node addons-493601 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m41s  kubelet          Node addons-493601 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m41s  kubelet          Node addons-493601 status is now: NodeReady
	  Normal  RegisteredNode           2m29s  node-controller  Node addons-493601 event: Registered Node addons-493601 in Controller
	
	
	==> dmesg <==
	[  +0.000985] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000f8b48ded
	[  +0.001119] FS-Cache: N-key=[8] '8f385c0100000000'
	[  +0.002710] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001018] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=00000000edaa5356
	[  +0.001057] FS-Cache: O-key=[8] '8f385c0100000000'
	[  +0.000805] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000929d8fe2
	[  +0.001116] FS-Cache: N-key=[8] '8f385c0100000000'
	[  +2.734723] FS-Cache: Duplicate cookie detected
	[  +0.000730] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001149] FS-Cache: O-key=[8] '8e385c0100000000'
	[  +0.000737] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000f8b48ded
	[  +0.001091] FS-Cache: N-key=[8] '8e385c0100000000'
	[  +0.414501] FS-Cache: Duplicate cookie detected
	[  +0.000729] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001028] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000002d1b8011
	[  +0.001201] FS-Cache: O-key=[8] '99385c0100000000'
	[  +0.000797] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001029] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000823aec36
	[  +0.001346] FS-Cache: N-key=[8] '99385c0100000000'
	[Mar 7 17:03] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Mar 7 17:24] hrtimer: interrupt took 41013117 ns
	
	
	==> etcd [5295bd103191c8f0ff3b2162325a256ca7cf25a48007524fe05f8bba089cf028] <==
	{"level":"info","ts":"2024-03-07T17:36:29.48606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-07T17:36:29.486148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-07T17:36:29.48707Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T17:36:29.487149Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T17:36:29.487173Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T17:36:29.487245Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T17:36:29.487271Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T17:36:29.769565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T17:36:29.769788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T17:36:29.769889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-07T17:36:29.769987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T17:36:29.770069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T17:36:29.770154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-07T17:36:29.770238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T17:36:29.771876Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-493601 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T17:36:29.772123Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:36:29.772228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T17:36:29.781864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-07T17:36:29.781998Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:36:29.792168Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T17:36:29.792218Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-07T17:36:29.792301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:36:29.792383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:36:29.79241Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T17:36:29.798433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [d9eca12781326b7fd70b78f110de69c1e3a74e8e047ceaa21b7de264e576011e] <==
	2024/03/07 17:38:07 GCP Auth Webhook started!
	2024/03/07 17:38:10 Ready to marshal response ...
	2024/03/07 17:38:10 Ready to write response ...
	2024/03/07 17:38:18 Ready to marshal response ...
	2024/03/07 17:38:18 Ready to write response ...
	2024/03/07 17:38:36 Ready to marshal response ...
	2024/03/07 17:38:36 Ready to write response ...
	2024/03/07 17:38:41 Ready to marshal response ...
	2024/03/07 17:38:41 Ready to write response ...
	2024/03/07 17:38:50 Ready to marshal response ...
	2024/03/07 17:38:50 Ready to write response ...
	2024/03/07 17:39:04 Ready to marshal response ...
	2024/03/07 17:39:04 Ready to write response ...
	2024/03/07 17:39:04 Ready to marshal response ...
	2024/03/07 17:39:04 Ready to write response ...
	2024/03/07 17:39:14 Ready to marshal response ...
	2024/03/07 17:39:14 Ready to write response ...
	
	
	==> kernel <==
	 17:39:17 up  1:21,  0 users,  load average: 1.97, 2.60, 2.66
	Linux addons-493601 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d034d2c6a4372266ec30ad878e202ab14def718f534063d4a68dab63160b11ae] <==
	I0307 17:37:12.657063       1 main.go:227] handling current node
	I0307 17:37:22.669912       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:37:22.669939       1 main.go:227] handling current node
	I0307 17:37:32.694934       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:37:32.700389       1 main.go:227] handling current node
	I0307 17:37:42.704890       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:37:42.704928       1 main.go:227] handling current node
	I0307 17:37:52.716973       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:37:52.717003       1 main.go:227] handling current node
	I0307 17:38:02.721620       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:02.721643       1 main.go:227] handling current node
	I0307 17:38:12.733595       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:12.733622       1 main.go:227] handling current node
	I0307 17:38:22.738353       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:22.738380       1 main.go:227] handling current node
	I0307 17:38:32.749568       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:32.749669       1 main.go:227] handling current node
	I0307 17:38:42.760106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:42.760132       1 main.go:227] handling current node
	I0307 17:38:52.830058       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:38:52.830086       1 main.go:227] handling current node
	I0307 17:39:02.834982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:39:02.835016       1 main.go:227] handling current node
	I0307 17:39:12.838789       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 17:39:12.838818       1 main.go:227] handling current node
	
	
	==> kube-apiserver [dc610faaa9e0de422b73f61929be58d797ea4cf979665a9bfe09dbf7aa4688c3] <==
	I0307 17:38:35.347485       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 17:38:36.368096       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 17:38:40.892584       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 17:38:41.244510       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.136.221"}
	I0307 17:38:50.913947       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.69.80"}
	I0307 17:38:52.477793       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.477837       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.503881       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.503937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.522991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.523054       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.547656       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.547699       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.562806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.562855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.586930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.586991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0307 17:38:52.594195       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0307 17:38:52.594408       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0307 17:38:53.548673       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0307 17:38:53.595089       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0307 17:38:53.617900       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0307 17:39:15.422187       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0307 17:39:15.425666       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0307 17:39:15.429341       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [79fbda64ef74428e0ce28e5029cb783bd5a9d65d831b051e0d1599dbf415fed7] <==
	W0307 17:38:57.351049       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:38:57.351085       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:38:57.830376       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:38:57.830408       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:02.822418       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:02.822454       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:02.869068       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:02.869106       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:03.358489       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:03.358523       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 17:39:04.559403       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0307 17:39:04.785701       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 17:39:07.798401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.452µs"
	I0307 17:39:08.556627       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 17:39:08.567792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="5.44µs"
	I0307 17:39:08.567833       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0307 17:39:09.556303       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:09.556336       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:10.896978       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:10.897009       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:11.027275       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:11.027322       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 17:39:15.076658       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 17:39:15.076692       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 17:39:15.152368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.972µs"
	
	
	==> kube-proxy [3b70593e2c3a86f51ec2300a96874b24c7ded82793485e60aabff1a6cd4c3e42] <==
	I0307 17:36:50.551361       1 server_others.go:69] "Using iptables proxy"
	I0307 17:36:50.574115       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0307 17:36:50.604981       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0307 17:36:50.620698       1 server_others.go:152] "Using iptables Proxier"
	I0307 17:36:50.620741       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0307 17:36:50.620749       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0307 17:36:50.620773       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 17:36:50.621027       1 server.go:846] "Version info" version="v1.28.4"
	I0307 17:36:50.621038       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 17:36:50.651368       1 config.go:188] "Starting service config controller"
	I0307 17:36:50.651400       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 17:36:50.651426       1 config.go:97] "Starting endpoint slice config controller"
	I0307 17:36:50.651430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 17:36:50.656020       1 config.go:315] "Starting node config controller"
	I0307 17:36:50.656051       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 17:36:50.752399       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 17:36:50.752455       1 shared_informer.go:318] Caches are synced for service config
	I0307 17:36:50.756096       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [45652b6b2da605776a860639783fbb77bcda5b9a709000523203a36627c383bd] <==
	W0307 17:36:33.355306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 17:36:33.355358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 17:36:33.355435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 17:36:33.355483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 17:36:33.355575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 17:36:33.355618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 17:36:33.355700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 17:36:33.355788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0307 17:36:33.369751       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 17:36:33.369863       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 17:36:34.175310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 17:36:34.175600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 17:36:34.194585       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 17:36:34.194862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 17:36:34.244568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 17:36:34.245001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 17:36:34.289848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 17:36:34.290111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 17:36:34.365406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 17:36:34.365710       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 17:36:34.371733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 17:36:34.372322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 17:36:34.516013       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 17:36:34.516252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0307 17:36:34.927363       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 17:39:13 addons-493601 kubelet[1485]: I0307 17:39:13.064061    1485 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3d5b9599-e71b-4690-afb5-379a12c8d4ac-gcp-creds\") on node \"addons-493601\" DevicePath \"\""
	Mar 07 17:39:13 addons-493601 kubelet[1485]: I0307 17:39:13.064082    1485 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vcvm6\" (UniqueName: \"kubernetes.io/projected/3d5b9599-e71b-4690-afb5-379a12c8d4ac-kube-api-access-vcvm6\") on node \"addons-493601\" DevicePath \"\""
	Mar 07 17:39:13 addons-493601 kubelet[1485]: I0307 17:39:13.787652    1485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b304ecf48d30a49ca34c6aba8c12cc2521951c252996452ae3c09d629558dd"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.336722    1485 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3d5b9599-e71b-4690-afb5-379a12c8d4ac" path="/var/lib/kubelet/pods/3d5b9599-e71b-4690-afb5-379a12c8d4ac/volumes"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.411308    1485 topology_manager.go:215] "Topology Admit Handler" podUID="952f58d4-9d8e-4a5d-a178-33d06efde090" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: E0307 17:39:14.411565    1485 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41040e7e-5e2e-4d0e-a966-2fecf5aa1690" containerName="minikube-ingress-dns"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: E0307 17:39:14.411663    1485 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5b9599-e71b-4690-afb5-379a12c8d4ac" containerName="busybox"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.411764    1485 memory_manager.go:346] "RemoveStaleState removing state" podUID="41040e7e-5e2e-4d0e-a966-2fecf5aa1690" containerName="minikube-ingress-dns"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.411843    1485 memory_manager.go:346] "RemoveStaleState removing state" podUID="3d5b9599-e71b-4690-afb5-379a12c8d4ac" containerName="busybox"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.471455    1485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-gcp-creds\") pod \"helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") " pod="local-path-storage/helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.471700    1485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/952f58d4-9d8e-4a5d-a178-33d06efde090-script\") pod \"helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") " pod="local-path-storage/helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.471797    1485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-data\") pod \"helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") " pod="local-path-storage/helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e"
	Mar 07 17:39:14 addons-493601 kubelet[1485]: I0307 17:39:14.471833    1485 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpv8\" (UniqueName: \"kubernetes.io/projected/952f58d4-9d8e-4a5d-a178-33d06efde090-kube-api-access-9cpv8\") pod \"helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") " pod="local-path-storage/helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e"
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.992692    1485 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-gcp-creds\") pod \"952f58d4-9d8e-4a5d-a178-33d06efde090\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") "
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.992748    1485 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/952f58d4-9d8e-4a5d-a178-33d06efde090-script\") pod \"952f58d4-9d8e-4a5d-a178-33d06efde090\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") "
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.992815    1485 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "952f58d4-9d8e-4a5d-a178-33d06efde090" (UID: "952f58d4-9d8e-4a5d-a178-33d06efde090"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.992866    1485 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-data\") pod \"952f58d4-9d8e-4a5d-a178-33d06efde090\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") "
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.992904    1485 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpv8\" (UniqueName: \"kubernetes.io/projected/952f58d4-9d8e-4a5d-a178-33d06efde090-kube-api-access-9cpv8\") pod \"952f58d4-9d8e-4a5d-a178-33d06efde090\" (UID: \"952f58d4-9d8e-4a5d-a178-33d06efde090\") "
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.993057    1485 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-gcp-creds\") on node \"addons-493601\" DevicePath \"\""
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.993287    1485 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-data" (OuterVolumeSpecName: "data") pod "952f58d4-9d8e-4a5d-a178-33d06efde090" (UID: "952f58d4-9d8e-4a5d-a178-33d06efde090"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.995380    1485 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952f58d4-9d8e-4a5d-a178-33d06efde090-kube-api-access-9cpv8" (OuterVolumeSpecName: "kube-api-access-9cpv8") pod "952f58d4-9d8e-4a5d-a178-33d06efde090" (UID: "952f58d4-9d8e-4a5d-a178-33d06efde090"). InnerVolumeSpecName "kube-api-access-9cpv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 17:39:16 addons-493601 kubelet[1485]: I0307 17:39:16.997852    1485 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/952f58d4-9d8e-4a5d-a178-33d06efde090-script" (OuterVolumeSpecName: "script") pod "952f58d4-9d8e-4a5d-a178-33d06efde090" (UID: "952f58d4-9d8e-4a5d-a178-33d06efde090"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 07 17:39:17 addons-493601 kubelet[1485]: I0307 17:39:17.093470    1485 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/952f58d4-9d8e-4a5d-a178-33d06efde090-script\") on node \"addons-493601\" DevicePath \"\""
	Mar 07 17:39:17 addons-493601 kubelet[1485]: I0307 17:39:17.093504    1485 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/952f58d4-9d8e-4a5d-a178-33d06efde090-data\") on node \"addons-493601\" DevicePath \"\""
	Mar 07 17:39:17 addons-493601 kubelet[1485]: I0307 17:39:17.093578    1485 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9cpv8\" (UniqueName: \"kubernetes.io/projected/952f58d4-9d8e-4a5d-a178-33d06efde090-kube-api-access-9cpv8\") on node \"addons-493601\" DevicePath \"\""
	
	
	==> storage-provisioner [c273eef1d47ac574fe6b6c3dd8abc426d2539cddfc6cfb82d6df6c15aceea2f9] <==
	I0307 17:36:56.632257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 17:36:56.657724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 17:36:56.657779       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 17:36:56.673616       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 17:36:56.675089       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-493601_b5ee7003-bd3b-420f-9487-9b134fbaf1a6!
	I0307 17:36:56.675718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e7bc1c7-7367-4de5-afe5-14c044d3482c", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-493601_b5ee7003-bd3b-420f-9487-9b134fbaf1a6 became leader
	I0307 17:36:56.776445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-493601_b5ee7003-bd3b-420f-9487-9b134fbaf1a6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-493601 -n addons-493601
helpers_test.go:261: (dbg) Run:  kubectl --context addons-493601 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-493601 describe pod helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-493601 describe pod helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e: exit status 1 (96.338145ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-493601 describe pod helper-pod-delete-pvc-d88010ac-e556-4434-84f7-887264f6234e: exit status 1
--- FAIL: TestAddons/parallel/Ingress (37.63s)
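
Note that the post-mortem describe above failed only because the short-lived helper pod was cleaned up between the pod listing and the describe call, so the NotFound error is a race, not an additional cluster problem. A minimal race-tolerant sketch of the same check, assuming the same kubectl context and, like the original helper, no explicit namespace:

	# List non-running pods, then describe each one only if it still exists;
	# NotFound here is expected for completed helper pods.
	for pod in $(kubectl --context addons-493601 get po -A \
	      --field-selector=status.phase!=Running \
	      -o=jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context addons-493601 describe pod "$pod" \
	    || echo "pod $pod already gone (expected for completed helper pods)"
	done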

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr
2024/03/07 17:44:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr: (3.916295194s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-529713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr: (4.03026006s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-529713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.29s)
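Because the profile runs containerd, the image list can also be cross-checked inside the node with crictl, bypassing minikube's own listing; a hedged sketch (crictl is the same tool the start logs below invoke; the ssh form is illustrative):

	out/minikube-linux-arm64 -p functional-529713 ssh -- sudo crictl images | grep addon-resizer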

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.585769386s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-529713
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 image load --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr: (3.180095631s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-529713" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image save gcr.io/google-containers/addon-resizer:functional-529713 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)
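A sketch of the failing step in isolation, assuming the same profile; the /tmp path is illustrative. image save should write an image tarball, so listing its contents is a quick sanity check:

	out/minikube-linux-arm64 -p functional-529713 image save gcr.io/google-containers/addon-resizer:functional-529713 /tmp/addon-resizer-save.tar
	ls -l /tmp/addon-resizer-save.tar   # this is the check that failed: no file was created
	tar tf /tmp/addon-resizer-save.tar  # a valid image tarball lists its manifest and layers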

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0307 17:45:08.795222  319509 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:45:08.795839  319509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:45:08.795876  319509 out.go:304] Setting ErrFile to fd 2...
	I0307 17:45:08.795900  319509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:45:08.796185  319509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:45:08.796876  319509 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:45:08.797063  319509 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:45:08.797628  319509 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
	I0307 17:45:08.813984  319509 ssh_runner.go:195] Run: systemctl --version
	I0307 17:45:08.814073  319509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
	I0307 17:45:08.830345  319509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
	I0307 17:45:08.926990  319509 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0307 17:45:08.927046  319509 cache_images.go:254] Failed to load cached images for profile functional-529713. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0307 17:45:08.927065  319509 cache_images.go:262] succeeded pushing to: 
	I0307 17:45:08.927072  319509 cache_images.go:263] failed pushing to: functional-529713

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
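Note the stat ... no such file or directory in the stderr above: this failure is downstream of ImageSaveToFile, which never produced the tarball. A guard like the following sketch (path taken from the log; the test itself does not do this) would make the dependency explicit:

	tarball=/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	if [ ! -s "$tarball" ]; then
	  echo 'tarball missing: the preceding image save never wrote it' >&2
	  exit 1
	fi
	out/minikube-linux-arm64 -p functional-529713 image load "$tarball"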

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (378.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0307 18:22:14.699199  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.30888872s)

                                                
                                                
-- stdout --
	* [old-k8s-version-997124] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-997124" primary control-plane node in "old-k8s-version-997124" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-997124" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-997124 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 18:21:46.100792  481658 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:21:46.101029  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:21:46.101057  481658 out.go:304] Setting ErrFile to fd 2...
	I0307 18:21:46.101081  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:21:46.101357  481658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 18:21:46.101781  481658 out.go:298] Setting JSON to false
	I0307 18:21:46.102780  481658 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7450,"bootTime":1709828256,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:21:46.102885  481658 start.go:139] virtualization:  
	I0307 18:21:46.106780  481658 out.go:177] * [old-k8s-version-997124] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:21:46.109296  481658 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 18:21:46.111538  481658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:21:46.109348  481658 notify.go:220] Checking for updates...
	I0307 18:21:46.113843  481658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 18:21:46.115770  481658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 18:21:46.118158  481658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:21:46.120134  481658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:21:46.122480  481658 config.go:182] Loaded profile config "old-k8s-version-997124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 18:21:46.125040  481658 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 18:21:46.126685  481658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:21:46.167205  481658 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:21:46.167310  481658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:21:46.267252  481658 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-07 18:21:46.25802398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:21:46.267355  481658 docker.go:295] overlay module found
	I0307 18:21:46.270973  481658 out.go:177] * Using the docker driver based on existing profile
	I0307 18:21:46.272845  481658 start.go:297] selected driver: docker
	I0307 18:21:46.272858  481658 start.go:901] validating driver "docker" against &{Name:old-k8s-version-997124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-997124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:21:46.272985  481658 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:21:46.273629  481658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:21:46.358055  481658 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-07 18:21:46.349006535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:21:46.358386  481658 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:21:46.358440  481658 cni.go:84] Creating CNI manager for ""
	I0307 18:21:46.358451  481658 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:21:46.358496  481658 start.go:340] cluster config:
	{Name:old-k8s-version-997124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-997124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:21:46.362535  481658 out.go:177] * Starting "old-k8s-version-997124" primary control-plane node in "old-k8s-version-997124" cluster
	I0307 18:21:46.364723  481658 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:21:46.366737  481658 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:21:46.368461  481658 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 18:21:46.368512  481658 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 18:21:46.368521  481658 cache.go:56] Caching tarball of preloaded images
	I0307 18:21:46.368597  481658 preload.go:173] Found /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 18:21:46.368606  481658 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0307 18:21:46.368720  481658 profile.go:142] Saving config to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/config.json ...
	I0307 18:21:46.368930  481658 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:21:46.393167  481658 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 18:21:46.393189  481658 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 18:21:46.393207  481658 cache.go:194] Successfully downloaded all kic artifacts
	I0307 18:21:46.393234  481658 start.go:360] acquireMachinesLock for old-k8s-version-997124: {Name:mk6ebcb99956b7b8944c7e480e81540daeeb837d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:21:46.393297  481658 start.go:364] duration metric: took 39.335µs to acquireMachinesLock for "old-k8s-version-997124"
	I0307 18:21:46.393317  481658 start.go:96] Skipping create...Using existing machine configuration
	I0307 18:21:46.393322  481658 fix.go:54] fixHost starting: 
	I0307 18:21:46.393618  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:46.416678  481658 fix.go:112] recreateIfNeeded on old-k8s-version-997124: state=Stopped err=<nil>
	W0307 18:21:46.416712  481658 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 18:21:46.419105  481658 out.go:177] * Restarting existing docker container for "old-k8s-version-997124" ...
	I0307 18:21:46.421123  481658 cli_runner.go:164] Run: docker start old-k8s-version-997124
	I0307 18:21:46.752974  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:46.775915  481658 kic.go:430] container "old-k8s-version-997124" state is running.
	I0307 18:21:46.776293  481658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-997124
	I0307 18:21:46.798955  481658 profile.go:142] Saving config to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/config.json ...
	I0307 18:21:46.799181  481658 machine.go:94] provisionDockerMachine start ...
	I0307 18:21:46.799244  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:46.830801  481658 main.go:141] libmachine: Using SSH client type: native
	I0307 18:21:46.831159  481658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0307 18:21:46.831191  481658 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 18:21:46.831953  481658 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0307 18:21:49.981094  481658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-997124
	
	I0307 18:21:49.981131  481658 ubuntu.go:169] provisioning hostname "old-k8s-version-997124"
	I0307 18:21:49.981200  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:50.019128  481658 main.go:141] libmachine: Using SSH client type: native
	I0307 18:21:50.019381  481658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0307 18:21:50.019399  481658 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-997124 && echo "old-k8s-version-997124" | sudo tee /etc/hostname
	I0307 18:21:50.191454  481658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-997124
	
	I0307 18:21:50.191539  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:50.219941  481658 main.go:141] libmachine: Using SSH client type: native
	I0307 18:21:50.220188  481658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0307 18:21:50.220205  481658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-997124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-997124/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-997124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 18:21:50.362049  481658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 18:21:50.362133  481658 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18241-280769/.minikube CaCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18241-280769/.minikube}
	I0307 18:21:50.362190  481658 ubuntu.go:177] setting up certificates
	I0307 18:21:50.362218  481658 provision.go:84] configureAuth start
	I0307 18:21:50.362304  481658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-997124
	I0307 18:21:50.390588  481658 provision.go:143] copyHostCerts
	I0307 18:21:50.390652  481658 exec_runner.go:144] found /home/jenkins/minikube-integration/18241-280769/.minikube/ca.pem, removing ...
	I0307 18:21:50.390661  481658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18241-280769/.minikube/ca.pem
	I0307 18:21:50.390740  481658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/ca.pem (1078 bytes)
	I0307 18:21:50.390852  481658 exec_runner.go:144] found /home/jenkins/minikube-integration/18241-280769/.minikube/cert.pem, removing ...
	I0307 18:21:50.390859  481658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18241-280769/.minikube/cert.pem
	I0307 18:21:50.390896  481658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/cert.pem (1123 bytes)
	I0307 18:21:50.390960  481658 exec_runner.go:144] found /home/jenkins/minikube-integration/18241-280769/.minikube/key.pem, removing ...
	I0307 18:21:50.390965  481658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18241-280769/.minikube/key.pem
	I0307 18:21:50.390988  481658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18241-280769/.minikube/key.pem (1675 bytes)
	I0307 18:21:50.391042  481658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-997124 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-997124]
	I0307 18:21:50.863831  481658 provision.go:177] copyRemoteCerts
	I0307 18:21:50.863959  481658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 18:21:50.864031  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:50.895104  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:51.003166  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 18:21:51.052577  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0307 18:21:51.095013  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 18:21:51.136277  481658 provision.go:87] duration metric: took 774.030091ms to configureAuth
	I0307 18:21:51.136302  481658 ubuntu.go:193] setting minikube options for container-runtime
	I0307 18:21:51.136502  481658 config.go:182] Loaded profile config "old-k8s-version-997124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 18:21:51.136509  481658 machine.go:97] duration metric: took 4.337321452s to provisionDockerMachine
	I0307 18:21:51.136516  481658 start.go:293] postStartSetup for "old-k8s-version-997124" (driver="docker")
	I0307 18:21:51.136527  481658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 18:21:51.136587  481658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 18:21:51.136626  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:51.177703  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:51.270797  481658 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 18:21:51.274114  481658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 18:21:51.274151  481658 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 18:21:51.274162  481658 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 18:21:51.274169  481658 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 18:21:51.274183  481658 filesync.go:126] Scanning /home/jenkins/minikube-integration/18241-280769/.minikube/addons for local assets ...
	I0307 18:21:51.274246  481658 filesync.go:126] Scanning /home/jenkins/minikube-integration/18241-280769/.minikube/files for local assets ...
	I0307 18:21:51.274327  481658 filesync.go:149] local asset: /home/jenkins/minikube-integration/18241-280769/.minikube/files/etc/ssl/certs/2861692.pem -> 2861692.pem in /etc/ssl/certs
	I0307 18:21:51.274428  481658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 18:21:51.290965  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/files/etc/ssl/certs/2861692.pem --> /etc/ssl/certs/2861692.pem (1708 bytes)
	I0307 18:21:51.323224  481658 start.go:296] duration metric: took 186.692536ms for postStartSetup
	I0307 18:21:51.323326  481658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:21:51.323399  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:51.361324  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:51.457862  481658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 18:21:51.462797  481658 fix.go:56] duration metric: took 5.069467826s for fixHost
	I0307 18:21:51.462824  481658 start.go:83] releasing machines lock for "old-k8s-version-997124", held for 5.069517918s
	I0307 18:21:51.462896  481658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-997124
	I0307 18:21:51.499108  481658 ssh_runner.go:195] Run: cat /version.json
	I0307 18:21:51.499164  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:51.499401  481658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 18:21:51.499459  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:51.542140  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:51.553609  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:51.794886  481658 ssh_runner.go:195] Run: systemctl --version
	I0307 18:21:51.801870  481658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 18:21:51.810143  481658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 18:21:51.849037  481658 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 18:21:51.849118  481658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 18:21:51.863869  481658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0307 18:21:51.863898  481658 start.go:494] detecting cgroup driver to use...
	I0307 18:21:51.863950  481658 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 18:21:51.864026  481658 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 18:21:51.881389  481658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 18:21:51.898663  481658 docker.go:217] disabling cri-docker service (if available) ...
	I0307 18:21:51.898754  481658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 18:21:51.913161  481658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 18:21:51.926237  481658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 18:21:52.079143  481658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 18:21:52.236540  481658 docker.go:233] disabling docker service ...
	I0307 18:21:52.236615  481658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 18:21:52.257846  481658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 18:21:52.275208  481658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 18:21:52.421283  481658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 18:21:52.556601  481658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 18:21:52.570603  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 18:21:52.595891  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0307 18:21:52.609330  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 18:21:52.625944  481658 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 18:21:52.626022  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 18:21:52.639169  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:21:52.652381  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 18:21:52.661776  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 18:21:52.674719  481658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 18:21:52.687840  481658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 18:21:52.700825  481658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 18:21:52.712419  481658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 18:21:52.723514  481658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:21:52.861695  481658 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 18:21:53.134852  481658 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 18:21:53.134978  481658 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 18:21:53.143365  481658 start.go:562] Will wait 60s for crictl version
	I0307 18:21:53.143516  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:21:53.150156  481658 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 18:21:53.235807  481658 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 18:21:53.235924  481658 ssh_runner.go:195] Run: containerd --version
	I0307 18:21:53.264142  481658 ssh_runner.go:195] Run: containerd --version
	I0307 18:21:53.299548  481658 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0307 18:21:53.301697  481658 cli_runner.go:164] Run: docker network inspect old-k8s-version-997124 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 18:21:53.326826  481658 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0307 18:21:53.331010  481658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:21:53.348305  481658 kubeadm.go:877] updating cluster {Name:old-k8s-version-997124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-997124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 18:21:53.348436  481658 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 18:21:53.348501  481658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:21:53.407293  481658 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 18:21:53.407372  481658 containerd.go:519] Images already preloaded, skipping extraction
	I0307 18:21:53.407466  481658 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 18:21:53.474782  481658 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 18:21:53.474802  481658 cache_images.go:84] Images are preloaded, skipping loading
	I0307 18:21:53.474810  481658 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0307 18:21:53.474934  481658 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-997124 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-997124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 18:21:53.475002  481658 ssh_runner.go:195] Run: sudo crictl info
	I0307 18:21:53.554013  481658 cni.go:84] Creating CNI manager for ""
	I0307 18:21:53.554093  481658 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:21:53.554120  481658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 18:21:53.554167  481658 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-997124 NodeName:old-k8s-version-997124 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0307 18:21:53.554331  481658 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-997124"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 18:21:53.554420  481658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0307 18:21:53.571023  481658 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 18:21:53.571139  481658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 18:21:53.584051  481658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0307 18:21:53.615120  481658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 18:21:53.647749  481658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0307 18:21:53.681990  481658 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0307 18:21:53.685873  481658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 18:21:53.699641  481658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:21:53.862756  481658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:21:53.891931  481658 certs.go:68] Setting up /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124 for IP: 192.168.76.2
	I0307 18:21:53.892003  481658 certs.go:194] generating shared ca certs ...
	I0307 18:21:53.892032  481658 certs.go:226] acquiring lock for ca certs: {Name:mka3b4968cfa6fbc711689192ca27e019bb8f9e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:21:53.892203  481658 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key
	I0307 18:21:53.892279  481658 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key
	I0307 18:21:53.892315  481658 certs.go:256] generating profile certs ...
	I0307 18:21:53.892455  481658 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.key
	I0307 18:21:53.892559  481658 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/apiserver.key.6585cd14
	I0307 18:21:53.892644  481658 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/proxy-client.key
	I0307 18:21:53.892787  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/286169.pem (1338 bytes)
	W0307 18:21:53.892847  481658 certs.go:480] ignoring /home/jenkins/minikube-integration/18241-280769/.minikube/certs/286169_empty.pem, impossibly tiny 0 bytes
	I0307 18:21:53.892871  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca-key.pem (1675 bytes)
	I0307 18:21:53.892934  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/ca.pem (1078 bytes)
	I0307 18:21:53.892989  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/cert.pem (1123 bytes)
	I0307 18:21:53.893047  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/certs/key.pem (1675 bytes)
	I0307 18:21:53.893122  481658 certs.go:484] found cert: /home/jenkins/minikube-integration/18241-280769/.minikube/files/etc/ssl/certs/2861692.pem (1708 bytes)
	I0307 18:21:53.893856  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 18:21:53.969104  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 18:21:54.040075  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 18:21:54.087589  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 18:21:54.132924  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 18:21:54.169854  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 18:21:54.221462  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 18:21:54.261576  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 18:21:54.304416  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/files/etc/ssl/certs/2861692.pem --> /usr/share/ca-certificates/2861692.pem (1708 bytes)
	I0307 18:21:54.341246  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 18:21:54.375910  481658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18241-280769/.minikube/certs/286169.pem --> /usr/share/ca-certificates/286169.pem (1338 bytes)
	I0307 18:21:54.403509  481658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 18:21:54.425214  481658 ssh_runner.go:195] Run: openssl version
	I0307 18:21:54.431659  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 18:21:54.443696  481658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:21:54.448159  481658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 17:36 /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:21:54.448243  481658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 18:21:54.455701  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 18:21:54.465181  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286169.pem && ln -fs /usr/share/ca-certificates/286169.pem /etc/ssl/certs/286169.pem"
	I0307 18:21:54.478589  481658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286169.pem
	I0307 18:21:54.482416  481658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 17:42 /usr/share/ca-certificates/286169.pem
	I0307 18:21:54.482492  481658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286169.pem
	I0307 18:21:54.490335  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286169.pem /etc/ssl/certs/51391683.0"
	I0307 18:21:54.500469  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2861692.pem && ln -fs /usr/share/ca-certificates/2861692.pem /etc/ssl/certs/2861692.pem"
	I0307 18:21:54.514083  481658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2861692.pem
	I0307 18:21:54.518449  481658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 17:42 /usr/share/ca-certificates/2861692.pem
	I0307 18:21:54.518563  481658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2861692.pem
	I0307 18:21:54.526174  481658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2861692.pem /etc/ssl/certs/3ec20f2e.0"
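
The `openssl x509 -hash` / `ln -fs` pairs above are OpenSSL's hash-directory convention: each CA in /etc/ssl/certs is reachable through a symlink named after its subject hash plus ".0" (b5213941.0 for minikubeCA.pem, 51391683.0 for 286169.pem). A minimal Go sketch of that install step, assuming `openssl` is on PATH; `installCA` is a hypothetical helper for illustration, not minikube's own code:

```go
// Illustrative sketch: hash a PEM with `openssl x509 -hash` and symlink it
// into /etc/ssl/certs under "<hash>.0" so OpenSSL's lookup-by-hash finds it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// Prints the subject-name hash (e.g. b5213941) used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Mirrors the `ln -fs` in the log: replace any stale symlink.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```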
	I0307 18:21:54.538669  481658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 18:21:54.542695  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 18:21:54.550074  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 18:21:54.557459  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 18:21:54.564804  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 18:21:54.572118  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 18:21:54.579512  481658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
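
Each `-checkend 86400` run above asks OpenSSL whether a certificate expires within the next 24 hours, so the restart path can regenerate stale certs before bringing the control plane back up. The same check in pure Go, as an illustrative sketch (`expiresWithin` is a hypothetical helper, not minikube code):

```go
// Sketch: report whether a PEM certificate's NotAfter falls within the next
// d, equivalent in spirit to `openssl x509 -checkend <seconds>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the cert's NotAfter lies inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```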
	I0307 18:21:54.586801  481658 kubeadm.go:391] StartCluster: {Name:old-k8s-version-997124 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-997124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:21:54.586959  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 18:21:54.587068  481658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 18:21:54.636470  481658 cri.go:89] found id: "b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff"
	I0307 18:21:54.636511  481658 cri.go:89] found id: "2cdb0e0aa66a40aa76d2231b63c856c9eb5b48c35cb4cff33a924e5a43a1e11a"
	I0307 18:21:54.636517  481658 cri.go:89] found id: "c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd"
	I0307 18:21:54.636520  481658 cri.go:89] found id: "c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8"
	I0307 18:21:54.636524  481658 cri.go:89] found id: "898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606"
	I0307 18:21:54.636528  481658 cri.go:89] found id: "06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636"
	I0307 18:21:54.636531  481658 cri.go:89] found id: "3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153"
	I0307 18:21:54.636534  481658 cri.go:89] found id: "1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8"
	I0307 18:21:54.636537  481658 cri.go:89] found id: ""
	I0307 18:21:54.636594  481658 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0307 18:21:54.649248  481658 cri.go:116] JSON = null
	W0307 18:21:54.649320  481658 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
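
The warning above is a count mismatch between two container listings: `crictl ps` found eight kube-system containers, while `runc list -f json` printed the literal `null`, which unmarshals to a nil slice. A sketch of how that comparison produces the "0 containers, but ps returned 8" message, under the assumption that the check simply compares lengths:

```go
// Sketch: JSON `null` unmarshals into a nil slice without error, so the
// paused-container count is zero even though crictl saw running containers.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	const psCount = 8 // kube-system containers crictl ps reported above
	var paused []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal([]byte("null"), &paused); err != nil {
		fmt.Println("unexpected:", err) // not reached: null is valid JSON
	}
	if len(paused) != psCount {
		fmt.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d\n",
			len(paused), psCount)
	}
}
```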
	I0307 18:21:54.649391  481658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 18:21:54.658446  481658 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 18:21:54.658470  481658 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 18:21:54.658490  481658 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 18:21:54.658538  481658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 18:21:54.667065  481658 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 18:21:54.667592  481658 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-997124" does not appear in /home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 18:21:54.667750  481658 kubeconfig.go:62] /home/jenkins/minikube-integration/18241-280769/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-997124" cluster setting kubeconfig missing "old-k8s-version-997124" context setting]
	I0307 18:21:54.668097  481658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/kubeconfig: {Name:mkb730a03bcec144218b310b25ab397685c133af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:21:54.669887  481658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 18:21:54.679797  481658 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0307 18:21:54.679840  481658 kubeadm.go:591] duration metric: took 21.343941ms to restartPrimaryControlPlane
	I0307 18:21:54.679851  481658 kubeadm.go:393] duration metric: took 93.059051ms to StartCluster
	I0307 18:21:54.679867  481658 settings.go:142] acquiring lock: {Name:mk7fc8981edba83f2165d6d3660f0909c818732a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:21:54.679948  481658 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 18:21:54.680555  481658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/kubeconfig: {Name:mkb730a03bcec144218b310b25ab397685c133af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:21:54.680756  481658 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 18:21:54.683242  481658 out.go:177] * Verifying Kubernetes components...
	I0307 18:21:54.681125  481658 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 18:21:54.681229  481658 config.go:182] Loaded profile config "old-k8s-version-997124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 18:21:54.685333  481658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 18:21:54.685426  481658 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-997124"
	I0307 18:21:54.685453  481658 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-997124"
	W0307 18:21:54.685460  481658 addons.go:243] addon storage-provisioner should already be in state true
	I0307 18:21:54.685484  481658 host.go:66] Checking if "old-k8s-version-997124" exists ...
	I0307 18:21:54.686040  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:54.686286  481658 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-997124"
	I0307 18:21:54.686312  481658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-997124"
	I0307 18:21:54.686574  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:54.686869  481658 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-997124"
	I0307 18:21:54.686902  481658 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-997124"
	W0307 18:21:54.686915  481658 addons.go:243] addon metrics-server should already be in state true
	I0307 18:21:54.686940  481658 host.go:66] Checking if "old-k8s-version-997124" exists ...
	I0307 18:21:54.687345  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:54.688985  481658 addons.go:69] Setting dashboard=true in profile "old-k8s-version-997124"
	I0307 18:21:54.689024  481658 addons.go:234] Setting addon dashboard=true in "old-k8s-version-997124"
	W0307 18:21:54.689030  481658 addons.go:243] addon dashboard should already be in state true
	I0307 18:21:54.689057  481658 host.go:66] Checking if "old-k8s-version-997124" exists ...
	I0307 18:21:54.689469  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:54.741657  481658 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 18:21:54.745747  481658 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:54.745770  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 18:21:54.745834  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:54.761620  481658 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0307 18:21:54.763420  481658 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 18:21:54.763436  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 18:21:54.763503  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:54.769548  481658 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0307 18:21:54.772743  481658 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0307 18:21:54.774819  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0307 18:21:54.774852  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0307 18:21:54.774927  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:54.778848  481658 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-997124"
	W0307 18:21:54.778869  481658 addons.go:243] addon default-storageclass should already be in state true
	I0307 18:21:54.778893  481658 host.go:66] Checking if "old-k8s-version-997124" exists ...
	I0307 18:21:54.779329  481658 cli_runner.go:164] Run: docker container inspect old-k8s-version-997124 --format={{.State.Status}}
	I0307 18:21:54.836882  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:54.858273  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:54.860766  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
	I0307 18:21:54.865199  481658 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 18:21:54.865220  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 18:21:54.865284  481658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-997124
	I0307 18:21:54.884433  481658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/old-k8s-version-997124/id_rsa Username:docker}
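
The four sshutil clients above all dial 127.0.0.1:33438, which is whatever host port Docker mapped to the container's 22/tcp; the preceding cli_runner lines resolve it with a Go-template inspect query. A rough sketch of that resolution, reusing the same template the log shows (`sshHostPort` is a hypothetical helper):

```go
// Sketch: ask Docker which host port is bound to the container's 22/tcp,
// then dial 127.0.0.1:<port> with the profile's SSH key.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-997124")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33438 in this run
}
```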
	I0307 18:21:54.927020  481658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 18:21:54.972590  481658 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-997124" to be "Ready" ...
	I0307 18:21:55.026992  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:55.087088  481658 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 18:21:55.087171  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0307 18:21:55.102531  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:21:55.145702  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0307 18:21:55.145768  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0307 18:21:55.172480  481658 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 18:21:55.172567  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 18:21:55.248731  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0307 18:21:55.248799  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0307 18:21:55.304204  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0307 18:21:55.304276  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0307 18:21:55.311000  481658 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:21:55.311068  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 18:21:55.340506  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0307 18:21:55.340578  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0307 18:21:55.356132  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:55.378886  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.378992  481658 retry.go:31] will retry after 316.27717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.383772  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0307 18:21:55.383840  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0307 18:21:55.467639  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.467728  481658 retry.go:31] will retry after 329.851015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.475860  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0307 18:21:55.475934  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0307 18:21:55.495087  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.495169  481658 retry.go:31] will retry after 370.762558ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.499507  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0307 18:21:55.499531  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0307 18:21:55.519207  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0307 18:21:55.519235  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0307 18:21:55.540638  481658 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 18:21:55.540706  481658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0307 18:21:55.559931  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 18:21:55.635073  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.635105  481658 retry.go:31] will retry after 316.450142ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.696268  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:55.798597  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:21:55.803477  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.803513  481658 retry.go:31] will retry after 310.174453ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.866680  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:55.882728  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.882761  481658 retry.go:31] will retry after 232.598467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.952088  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 18:21:55.958163  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:55.958195  481658 retry.go:31] will retry after 305.965519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:56.033899  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.033935  481658 retry.go:31] will retry after 213.391953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.114512  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:56.115708  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:21:56.247780  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 18:21:56.263860  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.263939  481658 retry.go:31] will retry after 566.597001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:56.264013  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.264056  481658 retry.go:31] will retry after 359.924083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.265245  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:56.418189  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.418220  481658 retry.go:31] will retry after 757.439434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:56.418266  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.418279  481658 retry.go:31] will retry after 829.098862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.625136  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:21:56.702021  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.702065  481658 retry.go:31] will retry after 696.655247ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.831190  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 18:21:56.946067  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.946177  481658 retry.go:31] will retry after 1.264676552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:56.973700  481658 node_ready.go:53] error getting node "old-k8s-version-997124": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-997124": dial tcp 192.168.76.2:8443: connect: connection refused
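
This is the first failed probe of the 6m0s node-readiness wait started at 18:21:54.972590: the apiserver on 192.168.76.2:8443 is still down, so the GET fails at the TCP dial. A stdlib-only sketch of such a polling loop; the URL and timeout come from the log, and InsecureSkipVerify is a shortcut for this sketch only (the real client trusts the profile's CA):

```go
// Sketch: poll the apiserver's node endpoint until it answers or the
// 6-minute budget runs out, logging dial failures along the way.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-997124"
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("error getting node:", err) // e.g. connect: connection refused
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}
```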
	I0307 18:21:57.175865  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 18:21:57.248300  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:57.316793  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:57.316828  481658 retry.go:31] will retry after 914.403265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:57.399117  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:21:57.423694  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:57.423726  481658 retry.go:31] will retry after 536.653277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:57.535753  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:57.535825  481658 retry.go:31] will retry after 1.371858056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:57.961051  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:58.077138  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.077219  481658 retry.go:31] will retry after 1.498403113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.211591  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:58.231915  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 18:21:58.319114  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.319151  481658 retry.go:31] will retry after 1.113325279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:58.337872  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.337906  481658 retry.go:31] will retry after 1.206767384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.908041  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:21:58.986338  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:58.986370  481658 retry.go:31] will retry after 999.251831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.433029  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:21:59.473588  481658 node_ready.go:53] error getting node "old-k8s-version-997124": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-997124": dial tcp 192.168.76.2:8443: connect: connection refused
	W0307 18:21:59.518513  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.518544  481658 retry.go:31] will retry after 2.228301721s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.545830  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 18:21:59.576108  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:21:59.685678  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.685755  481658 retry.go:31] will retry after 2.806107484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:21:59.724803  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.724836  481658 retry.go:31] will retry after 1.283442468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:21:59.986742  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:22:00.186351  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:00.186394  481658 retry.go:31] will retry after 3.218095155s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:01.009474  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:22:01.090433  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:01.090466  481658 retry.go:31] will retry after 1.529512595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:01.473649  481658 node_ready.go:53] error getting node "old-k8s-version-997124": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-997124": dial tcp 192.168.76.2:8443: connect: connection refused
	I0307 18:22:01.747777  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 18:22:01.825666  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:01.825701  481658 retry.go:31] will retry after 3.439060699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:02.492353  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 18:22:02.620809  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 18:22:02.638838  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:02.638875  481658 retry.go:31] will retry after 2.971060015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 18:22:02.752612  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:02.752645  481658 retry.go:31] will retry after 5.868292462s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:03.404694  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 18:22:03.718023  481658 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:03.718113  481658 retry.go:31] will retry after 3.397698694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 18:22:05.265854  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 18:22:05.610707  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 18:22:07.115970  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 18:22:08.621665  481658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 18:22:12.252869  481658 node_ready.go:49] node "old-k8s-version-997124" has status "Ready":"True"
	I0307 18:22:12.252898  481658 node_ready.go:38] duration metric: took 17.280226161s for node "old-k8s-version-997124" to be "Ready" ...
	I0307 18:22:12.252909  481658 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
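
The `pod_ready` waits that follow all have the same shape: poll the pod object until its Ready condition is True, treating transient apiserver errors (like the earlier "connection refused") as "not ready yet". A minimal client-go sketch of that loop, under the assumption of a valid kubeconfig at the path shown in the log; the real code is minikube's pod_ready.go, so this is an illustration only:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady polls until the pod's Ready condition is True or the timeout hits.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // e.g. "connection refused" while the apiserver restarts
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReady(context.Background(), cs, "kube-system", "coredns-74ff55c5b-5sdfx"))
}
```
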
	I0307 18:22:12.690378  481658 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-5sdfx" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:13.101605  481658 pod_ready.go:92] pod "coredns-74ff55c5b-5sdfx" in "kube-system" namespace has status "Ready":"True"
	I0307 18:22:13.101635  481658 pod_ready.go:81] duration metric: took 411.176068ms for pod "coredns-74ff55c5b-5sdfx" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:13.101646  481658 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:13.297573  481658 pod_ready.go:92] pod "etcd-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"True"
	I0307 18:22:13.297601  481658 pod_ready.go:81] duration metric: took 195.947771ms for pod "etcd-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:13.297641  481658 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:15.209037  481658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.943126063s)
	I0307 18:22:15.336247  481658 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:15.702009  481658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.091242475s)
	I0307 18:22:15.704423  481658 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-997124 addons enable metrics-server
	
	I0307 18:22:15.702294  481658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.586292405s)
	I0307 18:22:15.702376  481658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.080618066s)
	I0307 18:22:15.706419  481658 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-997124"
	I0307 18:22:15.717599  481658 out.go:177] * Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	I0307 18:22:15.719970  481658 addons.go:505] duration metric: took 21.038841491s for enable addons: enabled=[storage-provisioner dashboard metrics-server default-storageclass]
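
Worth noting: the four applies completed above ran concurrently over SSH, and their durations (9.9s, 10.1s, 8.6s, 7.1s) largely reflect the retries earlier in the log while localhost:8443 was still refusing connections, which is why the whole enable-addons phase took ~21s. Lines from the parallel goroutines can also land slightly out of timestamp order, as with the out.go message interleaved above.
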
	I0307 18:22:17.804577  481658 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:20.304220  481658 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:21.303321  481658 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"True"
	I0307 18:22:21.303349  481658 pod_ready.go:81] duration metric: took 8.00569099s for pod "kube-apiserver-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:21.303361  481658 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:22:23.309223  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:25.309917  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:27.311342  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:29.821891  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:32.347942  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:34.818927  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:36.823927  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:39.311760  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:41.810052  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:43.810646  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:45.814694  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:48.311040  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:50.819928  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:53.309924  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:55.312153  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:57.811107  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:22:59.814896  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:02.325088  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:04.818757  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:06.818899  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:09.309745  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:11.309902  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:13.313026  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:15.815836  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:17.817717  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:20.314523  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:22.812465  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:24.827338  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:27.310687  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:29.310720  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:31.814957  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:34.309311  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:36.309579  481658 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:36.813707  481658 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"True"
	I0307 18:23:36.813733  481658 pod_ready.go:81] duration metric: took 1m15.510364297s for pod "kube-controller-manager-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:36.813745  481658 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vpvtl" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:36.821929  481658 pod_ready.go:92] pod "kube-proxy-vpvtl" in "kube-system" namespace has status "Ready":"True"
	I0307 18:23:36.821955  481658 pod_ready.go:81] duration metric: took 8.202227ms for pod "kube-proxy-vpvtl" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:36.821965  481658 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:36.826350  481658 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-997124" in "kube-system" namespace has status "Ready":"True"
	I0307 18:23:36.826375  481658 pod_ready.go:81] duration metric: took 4.402646ms for pod "kube-scheduler-old-k8s-version-997124" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:36.826386  481658 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace to be "Ready" ...
	I0307 18:23:38.832211  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:41.332977  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:43.333682  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:45.831990  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:48.333505  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:50.831820  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:52.832682  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:55.332752  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:57.831728  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:23:59.832183  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:02.332718  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:04.833037  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:07.333348  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:09.832654  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:12.332080  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:14.333485  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:16.833756  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:19.332545  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:21.831983  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:23.833151  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:25.836454  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:27.836588  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:30.334395  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:32.831950  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:35.332549  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:37.832327  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:40.333378  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:42.832799  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:45.335322  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:47.832205  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:49.832275  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:52.332678  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:54.333153  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:56.831847  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:24:59.333289  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:01.333383  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:03.344865  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:05.832693  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:08.332247  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:10.332883  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:12.831914  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:15.332679  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:17.831541  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:19.832070  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:21.832217  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:23.832935  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:26.332501  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:28.332612  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:30.332715  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:32.834139  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:35.333262  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:37.831812  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:39.833136  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:42.332696  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:44.333103  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:46.832956  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:48.833140  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:50.834060  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:53.332362  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:55.332526  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:57.333248  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:25:59.333444  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:01.831914  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:03.888326  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:06.332564  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:08.832786  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:11.333351  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:13.831951  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:15.832130  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:17.832545  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:19.832672  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:22.332364  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:24.832213  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:27.333179  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:29.832224  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:32.333051  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:34.832609  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:37.331989  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:39.332538  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:41.332893  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:43.832001  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:46.332412  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:48.836223  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:51.333041  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:53.831988  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:55.832766  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:26:57.833486  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:00.336931  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:02.832240  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:04.833466  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:06.834073  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:09.332201  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:11.333931  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:13.831906  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:15.832422  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:17.833223  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:20.332507  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:22.333106  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:24.831870  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:26.832488  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:29.335840  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:31.831635  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:33.831777  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:35.832641  481658 pod_ready.go:102] pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace has status "Ready":"False"
	I0307 18:27:36.832916  481658 pod_ready.go:81] duration metric: took 4m0.006515201s for pod "metrics-server-9975d5f86-5lkvw" in "kube-system" namespace to be "Ready" ...
	E0307 18:27:36.832943  481658 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 18:27:36.832952  481658 pod_ready.go:38] duration metric: took 5m24.580033358s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
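
The "context deadline exceeded" above is the per-pod wait cap firing: each pod wait runs under its own bounded context, so one permanently unready pod (here the metrics-server, whose image pulls from the unresolvable fake.domain registry, per the kubelet errors further down) fails cleanly after 4m0s instead of blocking forever. A small illustrative sketch of that mechanism, with the timeout shortened so the example finishes quickly; this is not minikube's code:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitPodCondition polls isReady until it returns true or the context's
// deadline expires, wrapping the context error the way the log line does.
func waitPodCondition(ctx context.Context, isReady func() bool) error {
	tick := time.NewTicker(200 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-tick.C:
			if isReady() {
				return nil
			}
		}
	}
}

func main() {
	// The log used a 4m0s budget; 2s here keeps the demo short.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	err := waitPodCondition(ctx, func() bool { return false }) // never becomes Ready
	fmt.Println(err, errors.Is(err, context.DeadlineExceeded)) // ... true
}
```
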
	I0307 18:27:36.832968  481658 api_server.go:52] waiting for apiserver process to appear ...
	I0307 18:27:36.832996  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:27:36.833058  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:27:36.878567  481658 cri.go:89] found id: "c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152"
	I0307 18:27:36.878588  481658 cri.go:89] found id: "06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636"
	I0307 18:27:36.878593  481658 cri.go:89] found id: ""
	I0307 18:27:36.878600  481658 logs.go:276] 2 containers: [c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152 06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636]
	I0307 18:27:36.878656  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:36.882911  481658 ssh_runner.go:195] Run: which crictl
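
The cri.go/logs.go pairs here and below come from shelling out to `crictl ps -a --quiet --name=<name>`, where `--quiet` prints one container ID per line; the reported container count is just the number of non-empty lines. A simplified sketch of that step (error handling and the `which crictl` lookup omitted):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs crictl with --quiet so stdout is only container IDs,
// one per line, then splits them -- mirroring the cri.go listing step.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
```
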
	I0307 18:27:36.886484  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:27:36.886564  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:27:36.927844  481658 cri.go:89] found id: "1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26"
	I0307 18:27:36.927872  481658 cri.go:89] found id: "898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606"
	I0307 18:27:36.927878  481658 cri.go:89] found id: ""
	I0307 18:27:36.927886  481658 logs.go:276] 2 containers: [1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26 898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606]
	I0307 18:27:36.927951  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:36.931637  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:36.935244  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:27:36.935316  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:27:36.973901  481658 cri.go:89] found id: "17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6"
	I0307 18:27:36.973931  481658 cri.go:89] found id: "b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff"
	I0307 18:27:36.973938  481658 cri.go:89] found id: ""
	I0307 18:27:36.973946  481658 logs.go:276] 2 containers: [17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6 b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff]
	I0307 18:27:36.974003  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:36.977634  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:36.981200  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:27:36.981297  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:27:37.033136  481658 cri.go:89] found id: "e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee"
	I0307 18:27:37.033162  481658 cri.go:89] found id: "1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8"
	I0307 18:27:37.033167  481658 cri.go:89] found id: ""
	I0307 18:27:37.033175  481658 logs.go:276] 2 containers: [e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee 1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8]
	I0307 18:27:37.033235  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.037777  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.042385  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:27:37.042476  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:27:37.084488  481658 cri.go:89] found id: "770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95"
	I0307 18:27:37.084511  481658 cri.go:89] found id: "c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd"
	I0307 18:27:37.084517  481658 cri.go:89] found id: ""
	I0307 18:27:37.084524  481658 logs.go:276] 2 containers: [770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95 c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd]
	I0307 18:27:37.084588  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.088420  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.092123  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:27:37.092199  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:27:37.149088  481658 cri.go:89] found id: "30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96"
	I0307 18:27:37.149113  481658 cri.go:89] found id: "3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153"
	I0307 18:27:37.149118  481658 cri.go:89] found id: ""
	I0307 18:27:37.149125  481658 logs.go:276] 2 containers: [30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96 3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153]
	I0307 18:27:37.149182  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.153302  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.156913  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:27:37.156989  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:27:37.203741  481658 cri.go:89] found id: "9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c"
	I0307 18:27:37.203763  481658 cri.go:89] found id: "c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8"
	I0307 18:27:37.203768  481658 cri.go:89] found id: ""
	I0307 18:27:37.203776  481658 logs.go:276] 2 containers: [9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8]
	I0307 18:27:37.203831  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.207464  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.211091  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 18:27:37.211177  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 18:27:37.249140  481658 cri.go:89] found id: "8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921"
	I0307 18:27:37.249179  481658 cri.go:89] found id: ""
	I0307 18:27:37.249188  481658 logs.go:276] 1 container: [8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921]
	I0307 18:27:37.249294  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.252755  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:27:37.252873  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:27:37.295384  481658 cri.go:89] found id: "be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0"
	I0307 18:27:37.295408  481658 cri.go:89] found id: "ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d"
	I0307 18:27:37.295413  481658 cri.go:89] found id: ""
	I0307 18:27:37.295420  481658 logs.go:276] 2 containers: [be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0 ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d]
	I0307 18:27:37.295493  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.298999  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:37.302263  481658 logs.go:123] Gathering logs for dmesg ...
	I0307 18:27:37.302298  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:27:37.320701  481658 logs.go:123] Gathering logs for etcd [1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26] ...
	I0307 18:27:37.320730  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26"
	I0307 18:27:37.364698  481658 logs.go:123] Gathering logs for etcd [898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606] ...
	I0307 18:27:37.364727  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606"
	I0307 18:27:37.407839  481658 logs.go:123] Gathering logs for coredns [17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6] ...
	I0307 18:27:37.407869  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6"
	I0307 18:27:37.447683  481658 logs.go:123] Gathering logs for kube-proxy [c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd] ...
	I0307 18:27:37.447758  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd"
	I0307 18:27:37.493708  481658 logs.go:123] Gathering logs for storage-provisioner [be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0] ...
	I0307 18:27:37.493740  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0"
	I0307 18:27:37.544967  481658 logs.go:123] Gathering logs for container status ...
	I0307 18:27:37.544994  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
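
The backtick expression in the container-status command above is a fallback chain: if `which crictl` finds nothing on PATH, `echo crictl` keeps the substitution non-empty so the shell still attempts `crictl ps -a`, and if that invocation fails too, the trailing `|| sudo docker ps -a` tries the Docker CLI instead.
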
	I0307 18:27:37.606423  481658 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:27:37.606452  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:27:37.773556  481658 logs.go:123] Gathering logs for coredns [b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff] ...
	I0307 18:27:37.773590  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff"
	I0307 18:27:37.822990  481658 logs.go:123] Gathering logs for kube-controller-manager [3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153] ...
	I0307 18:27:37.823018  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153"
	I0307 18:27:37.880794  481658 logs.go:123] Gathering logs for kindnet [9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c] ...
	I0307 18:27:37.880872  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c"
	I0307 18:27:37.925699  481658 logs.go:123] Gathering logs for kindnet [c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8] ...
	I0307 18:27:37.925731  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8"
	I0307 18:27:37.971282  481658 logs.go:123] Gathering logs for kubernetes-dashboard [8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921] ...
	I0307 18:27:37.971312  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921"
	I0307 18:27:38.014379  481658 logs.go:123] Gathering logs for storage-provisioner [ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d] ...
	I0307 18:27:38.014413  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d"
	I0307 18:27:38.070023  481658 logs.go:123] Gathering logs for kubelet ...
	I0307 18:27:38.070052  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
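
The logs.go:138 "Found kubelet problem" lines that follow come from scanning the journalctl output just gathered for kubelet error entries. A minimal sketch of that kind of scan; minikube's real matcher has its own pattern set, so the regex here is an assumption:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// problemRe flags klog error lines ("E" + MMDD date) emitted by the kubelet
// unit, like the reflector and pod_workers errors in this log. Assumed pattern.
var problemRe = regexp.MustCompile(`kubelet\[\d+\]: E\d{4} `)

// findProblems returns every journal line that looks like a kubelet error.
func findProblems(journal string) []string {
	var hits []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			hits = append(hits, sc.Text())
		}
	}
	return hits
}

func main() {
	sample := "Mar 07 18:22:15 old-k8s-version-997124 kubelet[665]: E0307 18:22:15.023788     665 pod_workers.go:191] Error syncing pod"
	for _, h := range findProblems(sample) {
		fmt.Println("Found kubelet problem:", h)
	}
}
```
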
	W0307 18:27:38.120492  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.120987     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-5n7jf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-5n7jf" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.120849  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.156984     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.121065  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179016     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.121276  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179140     665 reflector.go:138] object-"kube-system"/"kindnet-token-jw527": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jw527" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.121484  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179236     665 reflector.go:138] object-"kube-system"/"coredns-token-bg2bj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bg2bj" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.121728  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.213994     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5tgwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5tgwz" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.121940  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.214690     665 reflector.go:138] object-"default"/"default-token-fjlgc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-fjlgc" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.122163  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.215002     665 reflector.go:138] object-"kube-system"/"metrics-server-token-ftzs5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ftzs5" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:38.132804  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:15 old-k8s-version-997124 kubelet[665]: E0307 18:22:15.023788     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:38.132996  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:15 old-k8s-version-997124 kubelet[665]: E0307 18:22:15.886669     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.135746  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:26 old-k8s-version-997124 kubelet[665]: E0307 18:22:26.695069     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:38.137401  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:37 old-k8s-version-997124 kubelet[665]: E0307 18:22:37.728719     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.138016  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:39 old-k8s-version-997124 kubelet[665]: E0307 18:22:39.022340     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.138348  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:40 old-k8s-version-997124 kubelet[665]: E0307 18:22:40.044128     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.139115  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:46 old-k8s-version-997124 kubelet[665]: E0307 18:22:46.056674     665 pod_workers.go:191] Error syncing pod 5e4df77f-71eb-4016-b5c4-5be9ff541504 ("storage-provisioner_kube-system(5e4df77f-71eb-4016-b5c4-5be9ff541504)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e4df77f-71eb-4016-b5c4-5be9ff541504)"
	W0307 18:27:38.139440  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:47 old-k8s-version-997124 kubelet[665]: E0307 18:22:47.215781     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.141911  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:49 old-k8s-version-997124 kubelet[665]: E0307 18:22:49.693115     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:38.142973  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:59 old-k8s-version-997124 kubelet[665]: E0307 18:22:59.090209     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.143155  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:03 old-k8s-version-997124 kubelet[665]: E0307 18:23:03.685838     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.143485  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:07 old-k8s-version-997124 kubelet[665]: E0307 18:23:07.216413     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.143667  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:14 old-k8s-version-997124 kubelet[665]: E0307 18:23:14.689214     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.144263  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:22 old-k8s-version-997124 kubelet[665]: E0307 18:23:22.154549     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.144444  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:25 old-k8s-version-997124 kubelet[665]: E0307 18:23:25.686505     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.144771  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:27 old-k8s-version-997124 kubelet[665]: E0307 18:23:27.216415     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.147205  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:39 old-k8s-version-997124 kubelet[665]: E0307 18:23:39.697016     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:38.147535  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:42 old-k8s-version-997124 kubelet[665]: E0307 18:23:42.685916     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.147718  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:50 old-k8s-version-997124 kubelet[665]: E0307 18:23:50.686144     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.148078  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:57 old-k8s-version-997124 kubelet[665]: E0307 18:23:57.685609     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.148260  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:03 old-k8s-version-997124 kubelet[665]: E0307 18:24:03.697443     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.148841  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:12 old-k8s-version-997124 kubelet[665]: E0307 18:24:12.264977     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.149025  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:15 old-k8s-version-997124 kubelet[665]: E0307 18:24:15.685952     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.149370  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:17 old-k8s-version-997124 kubelet[665]: E0307 18:24:17.215724     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.149559  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:27 old-k8s-version-997124 kubelet[665]: E0307 18:24:27.685837     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.149886  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:30 old-k8s-version-997124 kubelet[665]: E0307 18:24:30.685953     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.150069  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:38 old-k8s-version-997124 kubelet[665]: E0307 18:24:38.686907     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.150395  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:41 old-k8s-version-997124 kubelet[665]: E0307 18:24:41.685425     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.150720  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:49 old-k8s-version-997124 kubelet[665]: E0307 18:24:49.685896     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.151059  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:52 old-k8s-version-997124 kubelet[665]: E0307 18:24:52.685835     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.153492  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:03 old-k8s-version-997124 kubelet[665]: E0307 18:25:03.693908     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:38.153829  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:06 old-k8s-version-997124 kubelet[665]: E0307 18:25:06.686276     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.154013  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:15 old-k8s-version-997124 kubelet[665]: E0307 18:25:15.685946     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.154344  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:18 old-k8s-version-997124 kubelet[665]: E0307 18:25:18.686156     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.154668  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:29 old-k8s-version-997124 kubelet[665]: E0307 18:25:29.685472     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.154849  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:29 old-k8s-version-997124 kubelet[665]: E0307 18:25:29.686743     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.155437  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:42 old-k8s-version-997124 kubelet[665]: E0307 18:25:42.454082     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.155619  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:43 old-k8s-version-997124 kubelet[665]: E0307 18:25:43.686478     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.155942  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:47 old-k8s-version-997124 kubelet[665]: E0307 18:25:47.215963     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.156123  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:57 old-k8s-version-997124 kubelet[665]: E0307 18:25:57.685865     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.156446  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:58 old-k8s-version-997124 kubelet[665]: E0307 18:25:58.685718     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.156627  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:09 old-k8s-version-997124 kubelet[665]: E0307 18:26:09.685929     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.156950  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:12 old-k8s-version-997124 kubelet[665]: E0307 18:26:12.685855     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.157131  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:22 old-k8s-version-997124 kubelet[665]: E0307 18:26:22.686312     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.157458  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:24 old-k8s-version-997124 kubelet[665]: E0307 18:26:24.686072     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.157649  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:35 old-k8s-version-997124 kubelet[665]: E0307 18:26:35.685894     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.157974  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:39 old-k8s-version-997124 kubelet[665]: E0307 18:26:39.685488     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.158162  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:48 old-k8s-version-997124 kubelet[665]: E0307 18:26:48.685970     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.158486  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:50 old-k8s-version-997124 kubelet[665]: E0307 18:26:50.690172     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.158668  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:03 old-k8s-version-997124 kubelet[665]: E0307 18:27:03.685858     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.158991  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:05 old-k8s-version-997124 kubelet[665]: E0307 18:27:05.685818     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.159174  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:14 old-k8s-version-997124 kubelet[665]: E0307 18:27:14.685860     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.159499  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: E0307 18:27:19.685511     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.159681  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:26 old-k8s-version-997124 kubelet[665]: E0307 18:27:26.685882     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.160007  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: E0307 18:27:34.688803     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
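The block above reduces to two recurring failures. The metrics-server-9975d5f86-5lkvw pod never starts because its image points at fake.domain/registry.k8s.io/echoserver:1.4 and the pull fails at DNS resolution ("lookup fake.domain on 192.168.76.1:53: no such host"); in this suite the unreachable registry looks intentional, so the ImagePullBackOff entries are expected noise. dashboard-metrics-scraper-8d5bb5db8-t28kj, by contrast, is in CrashLoopBackOff, with the kubelet back-off doubling from 20s up to 2m40s in the entries above. A minimal sketch for inspecting both by hand, assuming the run's kubectl context carries the profile name old-k8s-version-997124:

	kubectl --context old-k8s-version-997124 -n kube-system \
	  describe pod metrics-server-9975d5f86-5lkvw | tail -n 20
	kubectl --context old-k8s-version-997124 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-8d5bb5db8-t28kj --previous --tail=50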
	I0307 18:27:38.160018  481658 logs.go:123] Gathering logs for kube-apiserver [c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152] ...
	I0307 18:27:38.160032  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152"
	I0307 18:27:38.222938  481658 logs.go:123] Gathering logs for kube-apiserver [06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636] ...
	I0307 18:27:38.222974  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636"
	I0307 18:27:38.280962  481658 logs.go:123] Gathering logs for kube-scheduler [e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee] ...
	I0307 18:27:38.280995  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee"
	I0307 18:27:38.319577  481658 logs.go:123] Gathering logs for kube-scheduler [1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8] ...
	I0307 18:27:38.319649  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8"
	I0307 18:27:38.365556  481658 logs.go:123] Gathering logs for kube-proxy [770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95] ...
	I0307 18:27:38.365604  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95"
	I0307 18:27:38.403664  481658 logs.go:123] Gathering logs for kube-controller-manager [30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96] ...
	I0307 18:27:38.403696  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96"
	I0307 18:27:38.460363  481658 logs.go:123] Gathering logs for containerd ...
	I0307 18:27:38.460411  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
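The gathering pattern above is uniform: per-component logs are tailed through crictl using the container IDs discovered earlier, while the container runtime itself is read from the systemd journal. The equivalent manual commands, run inside the node (this assumes sudo access and containerd as the runtime, as in this job):

	# tail one container by ID; IDs come from `sudo crictl ps -a --quiet --name=<component>`
	sudo /usr/bin/crictl logs --tail 400 c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152
	# last 400 lines of the containerd daemon itself
	sudo journalctl -u containerd -n 400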
	I0307 18:27:38.522786  481658 out.go:304] Setting ErrFile to fd 2...
	I0307 18:27:38.522818  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 18:27:38.522892  481658 out.go:239] X Problems detected in kubelet:
	W0307 18:27:38.522903  481658 out.go:239]   Mar 07 18:27:05 old-k8s-version-997124 kubelet[665]: E0307 18:27:05.685818     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.522916  481658 out.go:239]   Mar 07 18:27:14 old-k8s-version-997124 kubelet[665]: E0307 18:27:14.685860     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.522946  481658 out.go:239]   Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: E0307 18:27:19.685511     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:38.522953  481658 out.go:239]   Mar 07 18:27:26 old-k8s-version-997124 kubelet[665]: E0307 18:27:26.685882     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:38.522978  481658 out.go:239]   Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: E0307 18:27:34.688803     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	I0307 18:27:38.522984  481658 out.go:304] Setting ErrFile to fd 2...
	I0307 18:27:38.522998  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:27:48.524723  481658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:27:48.538067  481658 api_server.go:72] duration metric: took 5m53.857276304s to wait for apiserver process to appear ...
	I0307 18:27:48.538117  481658 api_server.go:88] waiting for apiserver healthz status ...
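With the process check done (pgrep above; 5m53.857276304s for the kube-apiserver process to appear), the loop now polls the apiserver's /healthz endpoint. A hedged equivalent of that probe, letting kubectl resolve the server address from the context rather than assuming a port:

	kubectl --context old-k8s-version-997124 get --raw='/healthz'
	# prints "ok" once the apiserver reports healthy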
	I0307 18:27:48.538154  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 18:27:48.538212  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 18:27:48.575846  481658 cri.go:89] found id: "c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152"
	I0307 18:27:48.575868  481658 cri.go:89] found id: "06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636"
	I0307 18:27:48.575873  481658 cri.go:89] found id: ""
	I0307 18:27:48.575880  481658 logs.go:276] 2 containers: [c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152 06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636]
	I0307 18:27:48.575935  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.579572  481658 ssh_runner.go:195] Run: which crictl
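Each component resolves to two container IDs here (c1cb1773... and 06457ed3... for kube-apiserver) because crictl is invoked with -a, which includes exited containers; after a stop/start cycle there is typically one container from the previous boot plus the current one, and both get tailed. A minimal sketch of that discovery loop, assuming sudo inside the node:

	# enumerate all kube-apiserver containers, running or exited, then tail each
	for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	done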
	I0307 18:27:48.583117  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 18:27:48.583186  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 18:27:48.619311  481658 cri.go:89] found id: "1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26"
	I0307 18:27:48.619332  481658 cri.go:89] found id: "898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606"
	I0307 18:27:48.619337  481658 cri.go:89] found id: ""
	I0307 18:27:48.619345  481658 logs.go:276] 2 containers: [1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26 898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606]
	I0307 18:27:48.619412  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.623008  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.626642  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 18:27:48.626762  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 18:27:48.666123  481658 cri.go:89] found id: "17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6"
	I0307 18:27:48.666152  481658 cri.go:89] found id: "b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff"
	I0307 18:27:48.666157  481658 cri.go:89] found id: ""
	I0307 18:27:48.666165  481658 logs.go:276] 2 containers: [17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6 b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff]
	I0307 18:27:48.666223  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.669909  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.673303  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 18:27:48.673429  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 18:27:48.738623  481658 cri.go:89] found id: "e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee"
	I0307 18:27:48.738646  481658 cri.go:89] found id: "1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8"
	I0307 18:27:48.738651  481658 cri.go:89] found id: ""
	I0307 18:27:48.738658  481658 logs.go:276] 2 containers: [e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee 1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8]
	I0307 18:27:48.738712  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.743449  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.747704  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 18:27:48.747812  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 18:27:48.792163  481658 cri.go:89] found id: "770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95"
	I0307 18:27:48.792186  481658 cri.go:89] found id: "c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd"
	I0307 18:27:48.792192  481658 cri.go:89] found id: ""
	I0307 18:27:48.792199  481658 logs.go:276] 2 containers: [770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95 c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd]
	I0307 18:27:48.792282  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.796583  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.799815  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 18:27:48.799898  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 18:27:48.847441  481658 cri.go:89] found id: "30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96"
	I0307 18:27:48.847513  481658 cri.go:89] found id: "3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153"
	I0307 18:27:48.847534  481658 cri.go:89] found id: ""
	I0307 18:27:48.847569  481658 logs.go:276] 2 containers: [30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96 3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153]
	I0307 18:27:48.847648  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.851462  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.854703  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 18:27:48.854768  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 18:27:48.907272  481658 cri.go:89] found id: "9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c"
	I0307 18:27:48.907294  481658 cri.go:89] found id: "c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8"
	I0307 18:27:48.907300  481658 cri.go:89] found id: ""
	I0307 18:27:48.907307  481658 logs.go:276] 2 containers: [9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8]
	I0307 18:27:48.907361  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.910877  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.914636  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 18:27:48.914755  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 18:27:48.956484  481658 cri.go:89] found id: "8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921"
	I0307 18:27:48.956507  481658 cri.go:89] found id: ""
	I0307 18:27:48.956515  481658 logs.go:276] 1 containers: [8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921]
	I0307 18:27:48.956580  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:48.960504  481658 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 18:27:48.960627  481658 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 18:27:49.010662  481658 cri.go:89] found id: "be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0"
	I0307 18:27:49.010684  481658 cri.go:89] found id: "ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d"
	I0307 18:27:49.010689  481658 cri.go:89] found id: ""
	I0307 18:27:49.010696  481658 logs.go:276] 2 containers: [be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0 ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d]
	I0307 18:27:49.010753  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:49.014989  481658 ssh_runner.go:195] Run: which crictl
	I0307 18:27:49.018674  481658 logs.go:123] Gathering logs for kube-proxy [c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd] ...
	I0307 18:27:49.018749  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd"
	I0307 18:27:49.057784  481658 logs.go:123] Gathering logs for kube-controller-manager [3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153] ...
	I0307 18:27:49.057817  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153"
	I0307 18:27:49.110428  481658 logs.go:123] Gathering logs for kindnet [c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8] ...
	I0307 18:27:49.110464  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8"
	I0307 18:27:49.150964  481658 logs.go:123] Gathering logs for dmesg ...
	I0307 18:27:49.151032  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 18:27:49.172602  481658 logs.go:123] Gathering logs for kube-apiserver [06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636] ...
	I0307 18:27:49.172658  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636"
	I0307 18:27:49.239777  481658 logs.go:123] Gathering logs for coredns [17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6] ...
	I0307 18:27:49.239810  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6"
	I0307 18:27:49.283221  481658 logs.go:123] Gathering logs for coredns [b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff] ...
	I0307 18:27:49.283249  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff"
	I0307 18:27:49.320463  481658 logs.go:123] Gathering logs for kube-scheduler [e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee] ...
	I0307 18:27:49.320493  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee"
	I0307 18:27:49.360528  481658 logs.go:123] Gathering logs for storage-provisioner [be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0] ...
	I0307 18:27:49.360615  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0"
	I0307 18:27:49.399745  481658 logs.go:123] Gathering logs for storage-provisioner [ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d] ...
	I0307 18:27:49.399824  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d"
	I0307 18:27:49.443334  481658 logs.go:123] Gathering logs for kube-apiserver [c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152] ...
	I0307 18:27:49.443364  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152"
	I0307 18:27:49.500646  481658 logs.go:123] Gathering logs for kube-proxy [770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95] ...
	I0307 18:27:49.500679  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95"
	I0307 18:27:49.540970  481658 logs.go:123] Gathering logs for kubelet ...
	I0307 18:27:49.540997  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
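This second kubelet scan (18:27:49) re-reports the problems already listed at 18:27:38, preceded by eight reflector errors from 18:22:12 in which secret and configmap watches were forbidden with "no relationship found between node ... and this object". That pattern is characteristic of a transient authorization race right after kubelet startup, before the node object is associated with its pods, rather than an ongoing RBAC failure. A quick, hedged way to confirm nothing is still failing on that path:

	# recent cluster events, newest last; assumes the profile-named context as above
	kubectl --context old-k8s-version-997124 get events -A --sort-by=.lastTimestamp | tail -n 20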
	W0307 18:27:49.593338  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.120987     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-5n7jf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-5n7jf" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.593656  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.156984     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.593859  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179016     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.594078  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179140     665 reflector.go:138] object-"kube-system"/"kindnet-token-jw527": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jw527" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.594295  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.179236     665 reflector.go:138] object-"kube-system"/"coredns-token-bg2bj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-bg2bj" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.594525  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.213994     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-5tgwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-5tgwz" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.594733  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.214690     665 reflector.go:138] object-"default"/"default-token-fjlgc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-fjlgc" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.594953  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:12 old-k8s-version-997124 kubelet[665]: E0307 18:22:12.215002     665 reflector.go:138] object-"kube-system"/"metrics-server-token-ftzs5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ftzs5" is forbidden: User "system:node:old-k8s-version-997124" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-997124' and this object
	W0307 18:27:49.605635  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:15 old-k8s-version-997124 kubelet[665]: E0307 18:22:15.023788     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:49.605823  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:15 old-k8s-version-997124 kubelet[665]: E0307 18:22:15.886669     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.608581  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:26 old-k8s-version-997124 kubelet[665]: E0307 18:22:26.695069     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:49.610321  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:37 old-k8s-version-997124 kubelet[665]: E0307 18:22:37.728719     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.610915  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:39 old-k8s-version-997124 kubelet[665]: E0307 18:22:39.022340     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.611239  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:40 old-k8s-version-997124 kubelet[665]: E0307 18:22:40.044128     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.612000  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:46 old-k8s-version-997124 kubelet[665]: E0307 18:22:46.056674     665 pod_workers.go:191] Error syncing pod 5e4df77f-71eb-4016-b5c4-5be9ff541504 ("storage-provisioner_kube-system(5e4df77f-71eb-4016-b5c4-5be9ff541504)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5e4df77f-71eb-4016-b5c4-5be9ff541504)"
	W0307 18:27:49.612322  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:47 old-k8s-version-997124 kubelet[665]: E0307 18:22:47.215781     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.614749  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:49 old-k8s-version-997124 kubelet[665]: E0307 18:22:49.693115     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:49.615787  481658 logs.go:138] Found kubelet problem: Mar 07 18:22:59 old-k8s-version-997124 kubelet[665]: E0307 18:22:59.090209     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.615969  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:03 old-k8s-version-997124 kubelet[665]: E0307 18:23:03.685838     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.616291  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:07 old-k8s-version-997124 kubelet[665]: E0307 18:23:07.216413     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.616472  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:14 old-k8s-version-997124 kubelet[665]: E0307 18:23:14.689214     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.617059  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:22 old-k8s-version-997124 kubelet[665]: E0307 18:23:22.154549     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.617240  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:25 old-k8s-version-997124 kubelet[665]: E0307 18:23:25.686505     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.617568  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:27 old-k8s-version-997124 kubelet[665]: E0307 18:23:27.216415     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.619980  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:39 old-k8s-version-997124 kubelet[665]: E0307 18:23:39.697016     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:49.620305  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:42 old-k8s-version-997124 kubelet[665]: E0307 18:23:42.685916     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.620490  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:50 old-k8s-version-997124 kubelet[665]: E0307 18:23:50.686144     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.620831  481658 logs.go:138] Found kubelet problem: Mar 07 18:23:57 old-k8s-version-997124 kubelet[665]: E0307 18:23:57.685609     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.621016  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:03 old-k8s-version-997124 kubelet[665]: E0307 18:24:03.697443     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.621607  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:12 old-k8s-version-997124 kubelet[665]: E0307 18:24:12.264977     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.621813  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:15 old-k8s-version-997124 kubelet[665]: E0307 18:24:15.685952     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.622143  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:17 old-k8s-version-997124 kubelet[665]: E0307 18:24:17.215724     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.622330  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:27 old-k8s-version-997124 kubelet[665]: E0307 18:24:27.685837     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.622657  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:30 old-k8s-version-997124 kubelet[665]: E0307 18:24:30.685953     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.622839  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:38 old-k8s-version-997124 kubelet[665]: E0307 18:24:38.686907     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.623162  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:41 old-k8s-version-997124 kubelet[665]: E0307 18:24:41.685425     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.623343  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:49 old-k8s-version-997124 kubelet[665]: E0307 18:24:49.685896     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.623666  481658 logs.go:138] Found kubelet problem: Mar 07 18:24:52 old-k8s-version-997124 kubelet[665]: E0307 18:24:52.685835     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.626311  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:03 old-k8s-version-997124 kubelet[665]: E0307 18:25:03.693908     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 18:27:49.626638  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:06 old-k8s-version-997124 kubelet[665]: E0307 18:25:06.686276     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.626821  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:15 old-k8s-version-997124 kubelet[665]: E0307 18:25:15.685946     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.627144  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:18 old-k8s-version-997124 kubelet[665]: E0307 18:25:18.686156     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.627467  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:29 old-k8s-version-997124 kubelet[665]: E0307 18:25:29.685472     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.627648  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:29 old-k8s-version-997124 kubelet[665]: E0307 18:25:29.686743     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.628230  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:42 old-k8s-version-997124 kubelet[665]: E0307 18:25:42.454082     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.628411  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:43 old-k8s-version-997124 kubelet[665]: E0307 18:25:43.686478     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.628734  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:47 old-k8s-version-997124 kubelet[665]: E0307 18:25:47.215963     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.628916  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:57 old-k8s-version-997124 kubelet[665]: E0307 18:25:57.685865     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.629239  481658 logs.go:138] Found kubelet problem: Mar 07 18:25:58 old-k8s-version-997124 kubelet[665]: E0307 18:25:58.685718     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.629419  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:09 old-k8s-version-997124 kubelet[665]: E0307 18:26:09.685929     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.629748  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:12 old-k8s-version-997124 kubelet[665]: E0307 18:26:12.685855     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.629933  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:22 old-k8s-version-997124 kubelet[665]: E0307 18:26:22.686312     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.630260  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:24 old-k8s-version-997124 kubelet[665]: E0307 18:26:24.686072     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.630442  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:35 old-k8s-version-997124 kubelet[665]: E0307 18:26:35.685894     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.630765  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:39 old-k8s-version-997124 kubelet[665]: E0307 18:26:39.685488     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.630947  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:48 old-k8s-version-997124 kubelet[665]: E0307 18:26:48.685970     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.631270  481658 logs.go:138] Found kubelet problem: Mar 07 18:26:50 old-k8s-version-997124 kubelet[665]: E0307 18:26:50.690172     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.631450  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:03 old-k8s-version-997124 kubelet[665]: E0307 18:27:03.685858     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.631774  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:05 old-k8s-version-997124 kubelet[665]: E0307 18:27:05.685818     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.631954  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:14 old-k8s-version-997124 kubelet[665]: E0307 18:27:14.685860     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.632279  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: E0307 18:27:19.685511     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.632460  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:26 old-k8s-version-997124 kubelet[665]: E0307 18:27:26.685882     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.632782  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: E0307 18:27:34.688803     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:49.632965  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:40 old-k8s-version-997124 kubelet[665]: E0307 18:27:40.685940     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:49.633287  481658 logs.go:138] Found kubelet problem: Mar 07 18:27:45 old-k8s-version-997124 kubelet[665]: E0307 18:27:45.685665     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	I0307 18:27:49.633297  481658 logs.go:123] Gathering logs for etcd [1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26] ...
	I0307 18:27:49.633311  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26"
	I0307 18:27:49.681621  481658 logs.go:123] Gathering logs for kindnet [9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c] ...
	I0307 18:27:49.681693  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c"
	I0307 18:27:49.737274  481658 logs.go:123] Gathering logs for containerd ...
	I0307 18:27:49.737310  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 18:27:49.802127  481658 logs.go:123] Gathering logs for container status ...
	I0307 18:27:49.802158  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 18:27:49.877318  481658 logs.go:123] Gathering logs for describe nodes ...
	I0307 18:27:49.877344  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 18:27:50.041403  481658 logs.go:123] Gathering logs for etcd [898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606] ...
	I0307 18:27:50.041436  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606"
	I0307 18:27:50.097118  481658 logs.go:123] Gathering logs for kube-scheduler [1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8] ...
	I0307 18:27:50.097148  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8"
	I0307 18:27:50.142793  481658 logs.go:123] Gathering logs for kube-controller-manager [30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96] ...
	I0307 18:27:50.142823  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96"
	I0307 18:27:50.201961  481658 logs.go:123] Gathering logs for kubernetes-dashboard [8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921] ...
	I0307 18:27:50.201999  481658 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921"
	I0307 18:27:50.248728  481658 out.go:304] Setting ErrFile to fd 2...
	I0307 18:27:50.248760  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 18:27:50.248821  481658 out.go:239] X Problems detected in kubelet:
	W0307 18:27:50.248838  481658 out.go:239]   Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: E0307 18:27:19.685511     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:50.248845  481658 out.go:239]   Mar 07 18:27:26 old-k8s-version-997124 kubelet[665]: E0307 18:27:26.685882     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:50.248877  481658 out.go:239]   Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: E0307 18:27:34.688803     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	W0307 18:27:50.248890  481658 out.go:239]   Mar 07 18:27:40 old-k8s-version-997124 kubelet[665]: E0307 18:27:40.685940     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 18:27:50.248896  481658 out.go:239]   Mar 07 18:27:45 old-k8s-version-997124 kubelet[665]: E0307 18:27:45.685665     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	I0307 18:27:50.248932  481658 out.go:304] Setting ErrFile to fd 2...
	I0307 18:27:50.248953  481658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:28:00.250792  481658 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0307 18:28:00.295907  481658 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0307 18:28:00.307781  481658 out.go:177] 
	W0307 18:28:00.309733  481658 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 18:28:00.309778  481658 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 18:28:00.309878  481658 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 18:28:00.309888  481658 out.go:239] * 
	W0307 18:28:00.310985  481658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 18:28:00.317865  481658 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
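Note: the "Suggestion" in the stderr above is the log's own recovery path. A minimal recovery sketch, assuming the same workspace and binary layout as this run (the start flags are copied verbatim from the failing command; these steps were not executed as part of this report):

	# Purge the stale profile, per the suggestion in the exit advice:
	out/minikube-linux-arm64 delete --all --purge
	# Retry the identical start that exited with status 102:
	out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0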
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-997124
helpers_test.go:235: (dbg) docker inspect old-k8s-version-997124:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9",
	        "Created": "2024-03-07T18:18:55.318973685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481841,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T18:21:46.744510762Z",
	            "FinishedAt": "2024-03-07T18:21:45.421964197Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9/hosts",
	        "LogPath": "/var/lib/docker/containers/586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9/586c692bc52eb3a6205352039383d38d9c81fca1f2bd76eeb3b475ef8c7452c9-json.log",
	        "Name": "/old-k8s-version-997124",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-997124:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-997124",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf648d997b620448dbe279393192a55dda8a4172c4c16472f477b8207abe736a-init/diff:/var/lib/docker/overlay2/0779a2b4023b2ef8823e4f754756b06078299f99078b3b2bb639a1812d9ff63d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf648d997b620448dbe279393192a55dda8a4172c4c16472f477b8207abe736a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf648d997b620448dbe279393192a55dda8a4172c4c16472f477b8207abe736a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf648d997b620448dbe279393192a55dda8a4172c4c16472f477b8207abe736a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-997124",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-997124/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-997124",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-997124",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-997124",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ea5ec05e668c53851094568e6dd5967084de7c5d783422ba1c72b392322d160",
	            "SandboxKey": "/var/run/docker/netns/1ea5ec05e668",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-997124": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "586c692bc52e",
	                        "old-k8s-version-997124"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "1880aaed57aa2897914d870a6bb54aa14b1903b636a525765654709f9f9733d5",
	                    "EndpointID": "8bfa07791bfefa0619128b8d1400e4147cc56d233d8a25ffe68d47c787ca68b0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-997124",
	                        "586c692bc52e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
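For quick triage, the fields above that matter most (container state, restart count, and the profile IP on the 192.168.76.0/24 network) can be pulled directly with docker's Go-template support; a small sketch, assuming the container still exists on the host:

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-997124
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-997124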
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-997124 -n old-k8s-version-997124
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-997124 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-997124 logs -n 25: (2.800423511s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-773079                           | force-systemd-flag-773079 | jenkins | v1.32.0 | 07 Mar 24 18:17 UTC | 07 Mar 24 18:18 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-773079                              | force-systemd-flag-773079 | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-773079                           | force-systemd-flag-773079 | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	| start   | -p cert-options-849238                                 | cert-options-849238       | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-849238 ssh                                | cert-options-849238       | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-849238 -- sudo                         | cert-options-849238       | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-849238                                 | cert-options-849238       | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:18 UTC |
	| start   | -p old-k8s-version-997124                              | old-k8s-version-997124    | jenkins | v1.32.0 | 07 Mar 24 18:18 UTC | 07 Mar 24 18:21 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-733819                              | cert-expiration-733819    | jenkins | v1.32.0 | 07 Mar 24 18:20 UTC | 07 Mar 24 18:21 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-733819                              | cert-expiration-733819    | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC | 07 Mar 24 18:21 UTC |
	| start   | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC | 07 Mar 24 18:22 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-997124        | old-k8s-version-997124    | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC | 07 Mar 24 18:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-997124                              | old-k8s-version-997124    | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC | 07 Mar 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-997124             | old-k8s-version-997124    | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC | 07 Mar 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-997124                              | old-k8s-version-997124    | jenkins | v1.32.0 | 07 Mar 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-769637             | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:22 UTC | 07 Mar 24 18:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:22 UTC | 07 Mar 24 18:22 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-769637                  | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:22 UTC | 07 Mar 24 18:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:22 UTC | 07 Mar 24 18:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| image   | no-preload-769637 image list                           | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC | 07 Mar 24 18:27 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC | 07 Mar 24 18:27 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC | 07 Mar 24 18:27 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC | 07 Mar 24 18:27 UTC |
	| delete  | -p no-preload-769637                                   | no-preload-769637         | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC | 07 Mar 24 18:27 UTC |
	| start   | -p embed-certs-262201                                  | embed-certs-262201        | jenkins | v1.32.0 | 07 Mar 24 18:27 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 18:27:57
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 18:27:57.565387  492467 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:27:57.565628  492467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:27:57.565642  492467 out.go:304] Setting ErrFile to fd 2...
	I0307 18:27:57.565647  492467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:27:57.565898  492467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 18:27:57.566324  492467 out.go:298] Setting JSON to false
	I0307 18:27:57.567352  492467 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7822,"bootTime":1709828256,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:27:57.567431  492467 start.go:139] virtualization:  
	I0307 18:27:57.570387  492467 out.go:177] * [embed-certs-262201] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:27:57.573139  492467 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 18:27:57.575176  492467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:27:57.573230  492467 notify.go:220] Checking for updates...
	I0307 18:27:57.579289  492467 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 18:27:57.581488  492467 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 18:27:57.583682  492467 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:27:57.585673  492467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:27:57.588119  492467 config.go:182] Loaded profile config "old-k8s-version-997124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 18:27:57.588233  492467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:27:57.609583  492467 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:27:57.609706  492467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:27:57.687183  492467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 18:27:57.67603407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:27:57.687294  492467 docker.go:295] overlay module found
	I0307 18:27:57.689463  492467 out.go:177] * Using the docker driver based on user configuration
	I0307 18:27:57.691423  492467 start.go:297] selected driver: docker
	I0307 18:27:57.691443  492467 start.go:901] validating driver "docker" against <nil>
	I0307 18:27:57.691457  492467 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:27:57.692113  492467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:27:57.757978  492467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 18:27:57.748181046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:27:57.758142  492467 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 18:27:57.758372  492467 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 18:27:57.760288  492467 out.go:177] * Using Docker driver with root privileges
	I0307 18:27:57.762278  492467 cni.go:84] Creating CNI manager for ""
	I0307 18:27:57.762299  492467 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 18:27:57.762314  492467 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 18:27:57.762395  492467 start.go:340] cluster config:
	{Name:embed-certs-262201 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-262201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 18:27:57.764725  492467 out.go:177] * Starting "embed-certs-262201" primary control-plane node in "embed-certs-262201" cluster
	I0307 18:27:57.766521  492467 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 18:27:57.768531  492467 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 18:27:57.770189  492467 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 18:27:57.770241  492467 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 18:27:57.770260  492467 cache.go:56] Caching tarball of preloaded images
	I0307 18:27:57.770274  492467 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 18:27:57.770336  492467 preload.go:173] Found /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 18:27:57.770346  492467 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 18:27:57.770454  492467 profile.go:142] Saving config to /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/embed-certs-262201/config.json ...
	I0307 18:27:57.770475  492467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/embed-certs-262201/config.json: {Name:mkf144566eccd34d9f833698b6320d5c887bd013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 18:27:57.787457  492467 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 18:27:57.787482  492467 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 18:27:57.787530  492467 cache.go:194] Successfully downloaded all kic artifacts
	I0307 18:27:57.787560  492467 start.go:360] acquireMachinesLock for embed-certs-262201: {Name:mk4362269783451cb0b1d2f5702aac4586dedec7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 18:27:57.788070  492467 start.go:364] duration metric: took 490.023µs to acquireMachinesLock for "embed-certs-262201"
	I0307 18:27:57.788115  492467 start.go:93] Provisioning new machine with config: &{Name:embed-certs-262201 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-262201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 18:27:57.788191  492467 start.go:125] createHost starting for "" (driver="docker")
	I0307 18:28:00.250792  481658 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0307 18:28:00.295907  481658 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0307 18:28:00.307781  481658 out.go:177] 
	W0307 18:28:00.309733  481658 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 18:28:00.309778  481658 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 18:28:00.309878  481658 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 18:28:00.309888  481658 out.go:239] * 
	W0307 18:28:00.310985  481658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 18:28:00.317865  481658 out.go:177] 
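	A recovery sketch based on the suggestion printed above (illustrative commands, not part of the captured output; the -p profile is inferred from context):
	
	  # The failure message advises a full reset; this removes all minikube profiles and cached state.
	  out/minikube-linux-arm64 delete --all --purge
	  # Collect logs to attach to the GitHub issue referenced above.
	  out/minikube-linux-arm64 -p old-k8s-version-997124 logs --file=logs.txt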
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8d80c2ce4b533       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   c1d5bd35f738c       dashboard-metrics-scraper-8d5bb5db8-t28kj
	be3446c2bfd02       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   d8abbe69c2473       storage-provisioner
	8f97ed067438f       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   39db4a6478991       kubernetes-dashboard-cd95d586-k57m5
	c1961fe5a9e0b       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   b4655f77f40d3       busybox
	770cf84d962d4       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   fdbdd94066bf4       kube-proxy-vpvtl
	ab24de38e7a12       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d8abbe69c2473       storage-provisioner
	17ab22ea05443       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   dadf9a57440b9       coredns-74ff55c5b-5sdfx
	9a553e3c5421c       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   7861b5359fbf6       kindnet-ldhsp
	30875f06dae11       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   0e926fb25a737       kube-controller-manager-old-k8s-version-997124
	c1cb177381c7a       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   9fd53d5440966       kube-apiserver-old-k8s-version-997124
	1b1b959631ce8       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   bd2814fb04980       etcd-old-k8s-version-997124
	e2dabd79971ed       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   54f06555f08e2       kube-scheduler-old-k8s-version-997124
	73049599ac79b       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   dd62d10e9d696       busybox
	b86e52eeb2236       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   43725ce7ddd0a       coredns-74ff55c5b-5sdfx
	c9136646b5122       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   e4f1da18a35f5       kube-proxy-vpvtl
	c29a3ecc76fa7       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   2d6a263bf2fd2       kindnet-ldhsp
	898c4689343ce       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   2609334aa32bf       etcd-old-k8s-version-997124
	06457ed3f3624       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   c1f2ac360bab6       kube-apiserver-old-k8s-version-997124
	3e264475e0b83       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   4c19e2e779024       kube-controller-manager-old-k8s-version-997124
	1906536812451       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   e8b93b33542ce       kube-scheduler-old-k8s-version-997124
	
	
	==> containerd <==
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.707425152Z" level=info msg="CreateContainer within sandbox \"c1d5bd35f738c100d01da16b90d44ab47d453f22198e2f38b68e71ec0fbd41be\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2\""
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.708043731Z" level=info msg="StartContainer for \"faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2\""
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.775054813Z" level=info msg="StartContainer for \"faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2\" returns successfully"
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.800070306Z" level=info msg="shim disconnected" id=faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.800122916Z" level=warning msg="cleaning up after shim disconnected" id=faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2 namespace=k8s.io
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.800135962Z" level=info msg="cleaning up dead shim"
	Mar 07 18:24:11 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:11.818157873Z" level=warning msg="cleanup warnings time=\"2024-03-07T18:24:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2970 runtime=io.containerd.runc.v2\n"
	Mar 07 18:24:12 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:12.266376681Z" level=info msg="RemoveContainer for \"c4cb706b79a0ea385ebdf59d6b13935d883a7cfeccea216aaaeeda9982f32a1c\""
	Mar 07 18:24:12 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:24:12.273403370Z" level=info msg="RemoveContainer for \"c4cb706b79a0ea385ebdf59d6b13935d883a7cfeccea216aaaeeda9982f32a1c\" returns successfully"
	Mar 07 18:25:03 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:03.686491877Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:25:03 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:03.691194976Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 07 18:25:03 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:03.693132552Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.687636052Z" level=info msg="CreateContainer within sandbox \"c1d5bd35f738c100d01da16b90d44ab47d453f22198e2f38b68e71ec0fbd41be\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.709745719Z" level=info msg="CreateContainer within sandbox \"c1d5bd35f738c100d01da16b90d44ab47d453f22198e2f38b68e71ec0fbd41be\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f\""
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.712939876Z" level=info msg="StartContainer for \"8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f\""
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.781238708Z" level=info msg="StartContainer for \"8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f\" returns successfully"
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.804046765Z" level=info msg="shim disconnected" id=8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.804105883Z" level=warning msg="cleaning up after shim disconnected" id=8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f namespace=k8s.io
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.804117534Z" level=info msg="cleaning up dead shim"
	Mar 07 18:25:41 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:41.814683379Z" level=warning msg="cleanup warnings time=\"2024-03-07T18:25:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3223 runtime=io.containerd.runc.v2\n"
	Mar 07 18:25:42 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:42.460341329Z" level=info msg="RemoveContainer for \"faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2\""
	Mar 07 18:25:42 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:25:42.466382633Z" level=info msg="RemoveContainer for \"faa77065941560763ade7af7a19f7ec0b821719d2ed46a4655b2731b358edce2\" returns successfully"
	Mar 07 18:27:52 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:27:52.686736368Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:27:52 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:27:52.705435016Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 07 18:27:52 old-k8s-version-997124 containerd[569]: time="2024-03-07T18:27:52.707186289Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
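	The pull errors above are a dead end by construction: the image reference lives on fake.domain, which the node's DNS (192.168.76.1:53) cannot resolve, so every retry fails identically. An illustrative reproduction from the node, assuming crictl is installed and pointed at the containerd socket listed in the node annotations:
	
	  # Fails with the same "no such host" resolution error seen in the log.
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull fake.domain/registry.k8s.io/echoserver:1.4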
	
	
	==> coredns [17ab22ea054436fa3fdef5c62c348afd1cde0410fcfc9ffb077296127319e4c6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:47440 - 13461 "HINFO IN 3944348801820263024.7808816708237427918. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0260837s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0307 18:22:45.807612       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 18:22:15.806878893 +0000 UTC m=+0.036059621) (total time: 30.000635495s):
	Trace[2019727887]: [30.000635495s] [30.000635495s] END
	E0307 18:22:45.807719       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 18:22:45.809325       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 18:22:15.80857062 +0000 UTC m=+0.037751348) (total time: 30.000733119s):
	Trace[939984059]: [30.000733119s] [30.000733119s] END
	E0307 18:22:45.809468       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 18:22:45.809844       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 18:22:15.809263816 +0000 UTC m=+0.038444552) (total time: 30.000566811s):
	Trace[1474941318]: [30.000566811s] [30.000566811s] END
	E0307 18:22:45.809959       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
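	The three reflector timeouts above show CoreDNS unable to reach the in-cluster API service at 10.96.0.1:443 for its first ~30s after restart, after which it recovers. An illustrative check that the service endpoint is populated once the apiserver is back (not from the captured log):
	
	  kubectl --context old-k8s-version-997124 -n default get endpoints kubernetes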
	
	
	==> coredns [b86e52eeb22361c7a14cfa1916f843f00c986ba2b7051069ca3d1a74ad703dff] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:38335 - 34198 "HINFO IN 1745621759464150620.4475964859014872115. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018938515s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-997124
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-997124
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c2be9b5c96c7962d271300acacf405d2402b272f
	                    minikube.k8s.io/name=old-k8s-version-997124
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T18_19_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 18:19:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-997124
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 18:27:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 18:23:03 +0000   Thu, 07 Mar 2024 18:19:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 18:23:03 +0000   Thu, 07 Mar 2024 18:19:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 18:23:03 +0000   Thu, 07 Mar 2024 18:19:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 18:23:03 +0000   Thu, 07 Mar 2024 18:19:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-997124
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2678b44ad97442c96cc2bbfee9bf816
	  System UUID:                ce3c8e0e-5d5c-4804-a32a-34ed341b3c82
	  Boot ID:                    a949ea88-4a69-4ab0-89c5-986450203265
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-5sdfx                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m13s
	  kube-system                 etcd-old-k8s-version-997124                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m20s
	  kube-system                 kindnet-ldhsp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m13s
	  kube-system                 kube-apiserver-old-k8s-version-997124             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-old-k8s-version-997124    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-vpvtl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-scheduler-old-k8s-version-997124             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 metrics-server-9975d5f86-5lkvw                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-t28kj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-k57m5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
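	  (Sanity check on the percentages above: allocatable is 2 CPUs, i.e. 2000m, and 8022496Ki of memory, so 950m/2000m ≈ 47%, and 420Mi = 430080Ki gives 430080/8022496 ≈ 5%, matching the cpu and memory request rows.)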
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m40s (x5 over 8m40s)  kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m40s (x5 over 8m40s)  kubelet     Node old-k8s-version-997124 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m40s (x5 over 8m40s)  kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet     Node old-k8s-version-997124 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m13s                  kubelet     Node old-k8s-version-997124 status is now: NodeReady
	  Normal  Starting                 8m11s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-997124 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-997124 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000767] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001007] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001126] FS-Cache: N-key=[8] 'df3a5c0100000000'
	[  +0.006419] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=00000000fd4f0320
	[  +0.001144] FS-Cache: O-key=[8] 'df3a5c0100000000'
	[  +0.000799] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001002] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000776f6318
	[  +0.001070] FS-Cache: N-key=[8] 'df3a5c0100000000'
	[  +1.794085] FS-Cache: Duplicate cookie detected
	[  +0.000796] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000002bfb4aa6
	[  +0.001203] FS-Cache: O-key=[8] 'de3a5c0100000000'
	[  +0.000769] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=00000000374e2f34
	[  +0.001183] FS-Cache: N-key=[8] 'de3a5c0100000000'
	[  +0.333056] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=0000000083f1d117{9p.inode} n=000000007434b39f
	[  +0.001097] FS-Cache: O-key=[8] 'e43a5c0100000000'
	[  +0.000794] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=0000000083f1d117{9p.inode} n=000000003a76bb30
	[  +0.001095] FS-Cache: N-key=[8] 'e43a5c0100000000'
	[Mar 7 18:16] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27/fs': -2
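	(The -2 in the overlayfs message above is ENOENT: the snapshot path could not be found. An illustrative check on the node, with the path taken verbatim from the message:)
	
	  sudo ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27/fs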
	
	
	==> etcd [1b1b959631ce8b5df7028c1e7b6681c87e712d655c39b2aa198cd40b0b022d26] <==
	2024-03-07 18:23:59.336553 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:09.337685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:19.336459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:29.336820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:39.336485 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:49.337998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:24:59.336477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:09.336517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:19.336635 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:29.336638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:39.336612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:49.336604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:25:59.336506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:09.340600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:19.336557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:29.336815 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:39.336478 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:49.336650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:26:59.336516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:09.336515 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:19.336516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:29.336724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:39.336620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:49.336624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:27:59.336555 I | etcdserver/api/etcdhttp: /health OK (status code 200)
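	The entries above are etcd answering its /health endpoint roughly every ten seconds. An illustrative manual probe from inside the node, assuming minikube's usual kubeadm cert layout under /var/lib/minikube/certs/etcd (paths not shown in this log):
	
	  sudo curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
	    --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
	    https://192.168.76.2:2379/health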
	
	
	==> etcd [898c4689343ce6ee0040e24b455f19091e2321d35147dae9df221078f3939606] <==
	raft2024/03/07 18:19:24 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/03/07 18:19:24 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/07 18:19:24 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/07 18:19:24 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/07 18:19:24 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-07 18:19:24.052584 I | etcdserver: published {Name:old-k8s-version-997124 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-07 18:19:24.053000 I | embed: ready to serve client requests
	2024-03-07 18:19:24.054947 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-07 18:19:24.055164 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-07 18:19:24.055598 I | embed: ready to serve client requests
	2024-03-07 18:19:24.061163 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-07 18:19:24.084841 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-07 18:19:24.085162 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-07 18:19:48.876586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:19:51.646777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:01.646850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:11.646787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:21.646858 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:31.646734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:41.646757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:20:51.646752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:21:01.647461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:21:11.647172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:21:21.646787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 18:21:31.646849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:28:02 up  2:10,  0 users,  load average: 0.73, 1.52, 2.18
	Linux old-k8s-version-997124 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [9a553e3c5421ce4d00dda3c529a447fc62543341feb6c6dc7f640f3bd283099c] <==
	I0307 18:25:56.153260       1 main.go:227] handling current node
	I0307 18:26:06.171900       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:06.172082       1 main.go:227] handling current node
	I0307 18:26:16.181951       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:16.181981       1 main.go:227] handling current node
	I0307 18:26:26.195362       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:26.195388       1 main.go:227] handling current node
	I0307 18:26:36.210625       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:36.210654       1 main.go:227] handling current node
	I0307 18:26:46.222669       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:46.222697       1 main.go:227] handling current node
	I0307 18:26:56.238959       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:26:56.238985       1 main.go:227] handling current node
	I0307 18:27:06.249881       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:06.250191       1 main.go:227] handling current node
	I0307 18:27:16.261113       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:16.261140       1 main.go:227] handling current node
	I0307 18:27:26.272983       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:26.273010       1 main.go:227] handling current node
	I0307 18:27:36.289706       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:36.289739       1 main.go:227] handling current node
	I0307 18:27:46.307230       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:46.307261       1 main.go:227] handling current node
	I0307 18:27:56.324811       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:27:56.324896       1 main.go:227] handling current node
	
	
	==> kindnet [c29a3ecc76fa7bd93fbae6f7b38e8541e22835ee5cc9c57b6a9915d68e0b18b8] <==
	I0307 18:19:50.760122       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0307 18:19:50.760183       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0307 18:19:50.760292       1 main.go:116] setting mtu 1500 for CNI 
	I0307 18:19:50.760301       1 main.go:146] kindnetd IP family: "ipv4"
	I0307 18:19:50.760312       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0307 18:20:20.957846       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0307 18:20:20.971524       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:20:20.971554       1 main.go:227] handling current node
	I0307 18:20:30.986376       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:20:30.986402       1 main.go:227] handling current node
	I0307 18:20:40.999667       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:20:40.999704       1 main.go:227] handling current node
	I0307 18:20:51.010345       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:20:51.010376       1 main.go:227] handling current node
	I0307 18:21:01.027537       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:21:01.027568       1 main.go:227] handling current node
	I0307 18:21:11.135523       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:21:11.135551       1 main.go:227] handling current node
	I0307 18:21:21.213483       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:21:21.213512       1 main.go:227] handling current node
	I0307 18:21:31.289842       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 18:21:31.289868       1 main.go:227] handling current node
	
	
	==> kube-apiserver [06457ed3f36245940d9d2141a131e202fed1b09fa32b05c05cc9590af8e2d636] <==
	I0307 18:19:31.110062       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0307 18:19:31.110093       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 18:19:31.140176       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0307 18:19:31.147398       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0307 18:19:31.147711       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0307 18:19:31.598650       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 18:19:31.660105       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0307 18:19:31.742914       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0307 18:19:31.744017       1 controller.go:606] quota admission added evaluator for: endpoints
	I0307 18:19:31.749825       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 18:19:32.796007       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0307 18:19:33.553904       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0307 18:19:33.645419       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0307 18:19:42.073243       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 18:19:49.747339       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0307 18:19:49.828222       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0307 18:20:05.929681       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:20:05.929921       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:20:05.930021       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 18:20:37.402407       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:20:37.402454       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:20:37.402463       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 18:21:20.902720       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:21:20.902782       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:21:20.902792       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [c1cb177381c7a688769b003a1d26b5ee0fd7127d1123189c47031eac05e7f152] <==
	I0307 18:24:40.102307       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:24:40.102317       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 18:25:16.400836       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 18:25:16.401035       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 18:25:16.401052       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 18:25:18.274413       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:25:18.274506       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:25:18.274524       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 18:25:50.512133       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:25:50.512176       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:25:50.512184       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 18:26:30.100664       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:26:30.100715       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:26:30.100726       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 18:27:03.368145       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:27:03.368190       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:27:03.368284       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 18:27:13.191875       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 18:27:13.191977       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 18:27:13.191994       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 18:27:33.468926       1 client.go:360] parsed scheme: "passthrough"
	I0307 18:27:33.468969       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 18:27:33.468978       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
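	The 503s above come from the aggregated v1beta1.metrics.k8s.io API, whose backing metrics-server pod never became ready (apparently the one stuck pulling from fake.domain, per the containerd log earlier). An illustrative way to inspect the aggregated API's availability (not from the captured log):
	
	  kubectl --context old-k8s-version-997124 get apiservice v1beta1.metrics.k8s.io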
	
	
	==> kube-controller-manager [30875f06dae118448845b68fdfe08ca3d0edd4dfac7bf2be7539ee1b7a374d96] <==
	W0307 18:23:39.628614       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:24:04.722939       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:24:11.279042       1 request.go:655] Throttling request took 1.048355371s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0307 18:24:12.130583       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:24:35.224702       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:24:43.781039       1 request.go:655] Throttling request took 1.04853717s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0307 18:24:44.632485       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:25:05.726711       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:25:16.283075       1 request.go:655] Throttling request took 1.048490273s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0307 18:25:17.134625       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:25:36.228589       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:25:48.785139       1 request.go:655] Throttling request took 1.048416104s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0307 18:25:49.636930       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:26:06.730411       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:26:21.287362       1 request.go:655] Throttling request took 1.048542265s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0307 18:26:22.138908       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:26:37.232257       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:26:53.789427       1 request.go:655] Throttling request took 1.048465946s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 18:26:54.640730       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:27:07.734407       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:27:26.291244       1 request.go:655] Throttling request took 1.048023094s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 18:27:27.142824       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 18:27:38.236631       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 18:27:58.793305       1 request.go:655] Throttling request took 1.048105239s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 18:27:59.644923       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [3e264475e0b83d78ac470f436dffd33ec856a060d62cc1c77f91d70bc91a2153] <==
	I0307 18:19:49.796007       1 shared_informer.go:247] Caches are synced for GC 
	I0307 18:19:49.817783       1 range_allocator.go:373] Set node old-k8s-version-997124 PodCIDR to [10.244.0.0/24]
	I0307 18:19:49.829173       1 shared_informer.go:247] Caches are synced for disruption 
	I0307 18:19:49.830115       1 disruption.go:339] Sending events to api server.
	I0307 18:19:49.830816       1 shared_informer.go:247] Caches are synced for endpoint 
	I0307 18:19:49.878825       1 shared_informer.go:247] Caches are synced for resource quota 
	I0307 18:19:49.879179       1 shared_informer.go:247] Caches are synced for taint 
	I0307 18:19:49.879787       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0307 18:19:49.879975       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-997124. Assuming now as a timestamp.
	I0307 18:19:49.880055       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0307 18:19:49.880165       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0307 18:19:49.880533       1 event.go:291] "Event occurred" object="old-k8s-version-997124" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-997124 event: Registered Node old-k8s-version-997124 in Controller"
	I0307 18:19:49.887691       1 shared_informer.go:247] Caches are synced for resource quota 
	I0307 18:19:49.900926       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5sdfx"
	I0307 18:19:49.940586       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-997124" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0307 18:19:49.940846       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vpvtl"
	I0307 18:19:50.007258       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ldhsp"
	I0307 18:19:50.055163       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0307 18:19:50.371596       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 18:19:50.378966       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 18:19:50.378986       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 18:19:51.283765       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0307 18:19:51.312121       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4gh2t"
	I0307 18:19:54.880520       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0307 18:21:32.799956       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-proxy [770cf84d962d4a1cc05b3c05c8369e1291bce1e0e4c7957471886372dcac7a95] <==
	I0307 18:22:15.987866       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0307 18:22:15.987943       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0307 18:22:16.011065       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 18:22:16.011267       1 server_others.go:185] Using iptables Proxier.
	I0307 18:22:16.011615       1 server.go:650] Version: v1.20.0
	I0307 18:22:16.012267       1 config.go:315] Starting service config controller
	I0307 18:22:16.012332       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 18:22:16.012369       1 config.go:224] Starting endpoint slice config controller
	I0307 18:22:16.014447       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 18:22:16.112498       1 shared_informer.go:247] Caches are synced for service config 
	I0307 18:22:16.114998       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [c9136646b51223df919f28ab5f2cdd019c98b7e6ceef00928f0b655c841d52fd] <==
	I0307 18:19:51.023621       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0307 18:19:51.023916       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0307 18:19:51.060268       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 18:19:51.060421       1 server_others.go:185] Using iptables Proxier.
	I0307 18:19:51.060890       1 server.go:650] Version: v1.20.0
	I0307 18:19:51.061805       1 config.go:315] Starting service config controller
	I0307 18:19:51.061816       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 18:19:51.061851       1 config.go:224] Starting endpoint slice config controller
	I0307 18:19:51.061856       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 18:19:51.161926       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0307 18:19:51.162000       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [1906536812451dae0209018696ef96e213b1944e0d50bcfd220f1a89dbb442d8] <==
	W0307 18:19:30.241936       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 18:19:30.242138       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 18:19:30.242230       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 18:19:30.242325       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 18:19:30.370344       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 18:19:30.370637       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:19:30.370712       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:19:30.370816       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0307 18:19:30.396675       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 18:19:30.396984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:19:30.397190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:19:30.397407       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 18:19:30.397612       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 18:19:30.397872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 18:19:30.397996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:19:30.398088       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 18:19:30.398161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 18:19:30.398229       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 18:19:30.398358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 18:19:30.414355       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 18:19:31.286250       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 18:19:31.286345       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 18:19:31.312501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 18:19:31.319699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0307 18:19:32.070855       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
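
Note: the burst of "forbidden" reflector errors at 18:19:30-31 is the usual scheduler startup race: it begins listing resources before the cluster's bootstrap RBAC policy for system:kube-scheduler has been reconciled. The closing "Caches are synced" line marks the point where the errors stop; the second scheduler instance below comes up cleanly past the same window.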
	
	
	==> kube-scheduler [e2dabd79971edca60d2b8d41e3b8da7866545ebce6a604ca0d4b7e7199c2d7ee] <==
	I0307 18:22:04.861161       1 serving.go:331] Generated self-signed cert in-memory
	W0307 18:22:12.098146       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 18:22:12.098173       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 18:22:12.098187       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 18:22:12.098208       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 18:22:12.332439       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 18:22:12.346272       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:22:12.346456       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 18:22:12.346550       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0307 18:22:12.546786       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 07 18:26:24 old-k8s-version-997124 kubelet[665]: E0307 18:26:24.686072     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:26:35 old-k8s-version-997124 kubelet[665]: E0307 18:26:35.685894     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:26:39 old-k8s-version-997124 kubelet[665]: I0307 18:26:39.685114     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:26:39 old-k8s-version-997124 kubelet[665]: E0307 18:26:39.685488     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:26:48 old-k8s-version-997124 kubelet[665]: E0307 18:26:48.685970     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:26:50 old-k8s-version-997124 kubelet[665]: I0307 18:26:50.689646     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:26:50 old-k8s-version-997124 kubelet[665]: E0307 18:26:50.690172     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:27:03 old-k8s-version-997124 kubelet[665]: E0307 18:27:03.685858     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:27:05 old-k8s-version-997124 kubelet[665]: I0307 18:27:05.685206     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:27:05 old-k8s-version-997124 kubelet[665]: E0307 18:27:05.685818     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:27:14 old-k8s-version-997124 kubelet[665]: E0307 18:27:14.685860     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: I0307 18:27:19.685158     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:27:19 old-k8s-version-997124 kubelet[665]: E0307 18:27:19.685511     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:27:26 old-k8s-version-997124 kubelet[665]: E0307 18:27:26.685882     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: I0307 18:27:34.687793     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:27:34 old-k8s-version-997124 kubelet[665]: E0307 18:27:34.688803     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:27:40 old-k8s-version-997124 kubelet[665]: E0307 18:27:40.685940     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 18:27:45 old-k8s-version-997124 kubelet[665]: I0307 18:27:45.685198     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:27:45 old-k8s-version-997124 kubelet[665]: E0307 18:27:45.685665     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
	Mar 07 18:27:52 old-k8s-version-997124 kubelet[665]: E0307 18:27:52.707582     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 07 18:27:52 old-k8s-version-997124 kubelet[665]: E0307 18:27:52.707988     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 07 18:27:52 old-k8s-version-997124 kubelet[665]: E0307 18:27:52.708193     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-ftzs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 07 18:27:52 old-k8s-version-997124 kubelet[665]: E0307 18:27:52.708376     665 pod_workers.go:191] Error syncing pod 63a987d5-4c83-4cc7-bd5e-d7d48b287dcc ("metrics-server-9975d5f86-5lkvw_kube-system(63a987d5-4c83-4cc7-bd5e-d7d48b287dcc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 07 18:27:59 old-k8s-version-997124 kubelet[665]: I0307 18:27:59.685125     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8d80c2ce4b5338fe23d6821f89ab1961aa8b343691b3f6036cc85a6644ecc64f
	Mar 07 18:27:59 old-k8s-version-997124 kubelet[665]: E0307 18:27:59.685487     665 pod_workers.go:191] Error syncing pod 15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b ("dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-t28kj_kubernetes-dashboard(15673d5a-ee71-4d71-83a7-ca2aa9dcaf2b)"
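
Note: the repeating "back-off 2m40s" for dashboard-metrics-scraper is kubelet's standard crash-loop schedule rather than a new failure: the restart delay starts at 10s, doubles per crash, and caps at 5m, so 2m40s (160s) is the fifth step. A minimal sketch of that schedule in Go (illustrative only; the constants mirror kubelet's defaults):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// kubelet's default CrashLoopBackOff: 10s initial delay, doubled on
		// each restart, capped at 5m; 160s prints as "2m40s".
		delay, maxDelay := 10*time.Second, 5*time.Minute
		for attempt := 1; attempt <= 7; attempt++ {
			fmt.Printf("restart %d: back-off %s\n", attempt, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}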
	
	
	==> kubernetes-dashboard [8f97ed067438fc85708bc7a18384b09b2d3167a4c39a1c46965e253340d97921] <==
	2024/03/07 18:22:41 Starting overwatch
	2024/03/07 18:22:41 Using namespace: kubernetes-dashboard
	2024/03/07 18:22:41 Using in-cluster config to connect to apiserver
	2024/03/07 18:22:41 Using secret token for csrf signing
	2024/03/07 18:22:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/07 18:22:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/07 18:22:41 Successful initial request to the apiserver, version: v1.20.0
	2024/03/07 18:22:41 Generating JWE encryption key
	2024/03/07 18:22:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/07 18:22:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/07 18:22:41 Initializing JWE encryption key from synchronized object
	2024/03/07 18:22:41 Creating in-cluster Sidecar client
	2024/03/07 18:22:41 Serving insecurely on HTTP port: 9090
	2024/03/07 18:22:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:23:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:24:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:25:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:25:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:26:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:26:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:27:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 18:27:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
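
Note: dashboard-metrics-scraper is the pod shown crash-looping in the kubelet section above, which is consistent with these health checks failing on every 30-second retry for the remainder of the run.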
	
	
	==> storage-provisioner [ab24de38e7a1226293b9ecafb0709a5690ff94759670b44570f77449e8b93d8d] <==
	I0307 18:22:15.902171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0307 18:22:45.904503       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [be3446c2bfd0292db840ea5a2a17510480729b9c274061d29363e74891207ca0] <==
	I0307 18:22:56.912440       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 18:22:56.934384       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 18:22:56.934609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 18:23:14.403258       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 18:23:14.403634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-997124_32729175-3215-4b9f-b5f6-2d50c56b7054!
	I0307 18:23:14.404415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aba3f0a4-0f29-4849-979b-b390f4417e19", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-997124_32729175-3215-4b9f-b5f6-2d50c56b7054 became leader
	I0307 18:23:14.504594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-997124_32729175-3215-4b9f-b5f6-2d50c56b7054!
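
Note: the two storage-provisioner excerpts capture a restart sequence: the first instance fatals when its /version probe to the in-cluster apiserver VIP (10.96.0.1:443) times out while the control plane is still coming back, and its replacement then initializes and acquires the kube-system/k8s.io-minikube-hostpath leader lease. A hedged client-go sketch of that kind of bounded startup probe (illustrative; not the provisioner's actual code):

	package main

	import (
		"fmt"
		"log"
		"time"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		// Bound the probe; the fatal line above shows a "?timeout=32s" request.
		cfg.Timeout = 32 * time.Second
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		v, err := dc.ServerVersion()
		if err != nil {
			// With the apiserver unreachable, this is the shape of the
			// "error getting server version" failure seen above.
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}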
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-997124 -n old-k8s-version-997124
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-997124 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5lkvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-997124 describe pod metrics-server-9975d5f86-5lkvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-997124 describe pod metrics-server-9975d5f86-5lkvw: exit status 1 (93.935759ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-5lkvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-997124 describe pod metrics-server-9975d5f86-5lkvw: exit status 1
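
Note: the NotFound here is a teardown race rather than a fresh failure: the metrics-server pod was listed as non-running moments earlier, then deleted before kubectl describe ran. A post-mortem helper can tolerate that window by treating NotFound as "already gone"; a sketch with client-go (the helper itself is hypothetical, the API calls are real):

	package main

	import (
		"context"
		"fmt"
		"log"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		name, ns := "metrics-server-9975d5f86-5lkvw", "kube-system"
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Println("pod already deleted; skipping describe") // the race seen above
		case err != nil:
			log.Fatal(err)
		default:
			fmt.Println("pod phase:", pod.Status.Phase)
		}
	}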
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.30s)

                                                
                                    

Test pass (297/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.65
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 9.63
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 9.54
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.19
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.35
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.25
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 139.15
38 TestAddons/parallel/Registry 14.71
40 TestAddons/parallel/InspektorGadget 10.8
41 TestAddons/parallel/MetricsServer 6.85
44 TestAddons/parallel/CSI 44.59
45 TestAddons/parallel/Headlamp 11.55
46 TestAddons/parallel/CloudSpanner 5.57
47 TestAddons/parallel/LocalPath 53.36
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.24
54 TestCertOptions 35.47
55 TestCertExpiration 225.6
57 TestForceSystemdFlag 43.62
58 TestForceSystemdEnv 37.47
59 TestDockerEnvContainerd 45.37
64 TestErrorSpam/setup 31.16
65 TestErrorSpam/start 0.73
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.71
68 TestErrorSpam/unpause 1.84
69 TestErrorSpam/stop 1.47
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 60.72
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 5.98
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.99
81 TestFunctional/serial/CacheCmd/cache/add_local 1.48
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 45.51
90 TestFunctional/serial/ComponentHealth 0.09
91 TestFunctional/serial/LogsCmd 1.71
92 TestFunctional/serial/LogsFileCmd 1.73
93 TestFunctional/serial/InvalidService 4.5
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 13.18
97 TestFunctional/parallel/DryRun 0.55
98 TestFunctional/parallel/InternationalLanguage 0.28
99 TestFunctional/parallel/StatusCmd 1.08
103 TestFunctional/parallel/ServiceCmdConnect 10.67
104 TestFunctional/parallel/AddonsCmd 0.22
105 TestFunctional/parallel/PersistentVolumeClaim 26.23
107 TestFunctional/parallel/SSHCmd 0.69
108 TestFunctional/parallel/CpCmd 2.5
110 TestFunctional/parallel/FileSync 0.43
111 TestFunctional/parallel/CertSync 2.15
115 TestFunctional/parallel/NodeLabels 0.12
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
119 TestFunctional/parallel/License 0.32
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.55
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.24
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
133 TestFunctional/parallel/ProfileCmd/profile_list 0.42
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
135 TestFunctional/parallel/ServiceCmd/List 0.61
136 TestFunctional/parallel/MountCmd/any-port 6.64
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
139 TestFunctional/parallel/ServiceCmd/Format 0.51
140 TestFunctional/parallel/ServiceCmd/URL 0.54
141 TestFunctional/parallel/MountCmd/specific-port 1.89
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.29
143 TestFunctional/parallel/Version/short 0.14
144 TestFunctional/parallel/Version/components 1.56
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
150 TestFunctional/parallel/ImageCommands/Setup 2.34
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMutliControlPlane/serial/StartCluster 130.89
168 TestMutliControlPlane/serial/DeployApp 34.63
169 TestMutliControlPlane/serial/PingHostFromPods 1.7
170 TestMutliControlPlane/serial/AddWorkerNode 25.24
171 TestMutliControlPlane/serial/NodeLabels 0.13
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.81
173 TestMutliControlPlane/serial/CopyFile 19.7
174 TestMutliControlPlane/serial/StopSecondaryNode 12.85
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
176 TestMutliControlPlane/serial/RestartSecondaryNode 18.25
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 143.78
179 TestMutliControlPlane/serial/DeleteSecondaryNode 10.46
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
181 TestMutliControlPlane/serial/StopCluster 36.04
182 TestMutliControlPlane/serial/RestartCluster 46.82
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.6
184 TestMutliControlPlane/serial/AddSecondaryNode 45.76
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
189 TestJSONOutput/start/Command 58.24
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.74
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.66
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.78
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.25
214 TestKicCustomNetwork/create_custom_network 41.27
215 TestKicCustomNetwork/use_default_bridge_network 35.56
216 TestKicExistingNetwork 36.23
217 TestKicCustomSubnet 32.41
218 TestKicStaticIP 35.72
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 69.51
223 TestMountStart/serial/StartWithMountFirst 6.43
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 7.86
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.61
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 8.21
231 TestMountStart/serial/VerifyMountPostStop 0.31
234 TestMultiNode/serial/FreshStart2Nodes 76.27
235 TestMultiNode/serial/DeployApp2Nodes 8.83
236 TestMultiNode/serial/PingHostFrom2Pods 1.04
237 TestMultiNode/serial/AddNode 15.99
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 10.12
241 TestMultiNode/serial/StopNode 2.24
242 TestMultiNode/serial/StartAfterStop 9.47
243 TestMultiNode/serial/RestartKeepsNodes 78.97
244 TestMultiNode/serial/DeleteNode 5.4
245 TestMultiNode/serial/StopMultiNode 24.1
246 TestMultiNode/serial/RestartMultiNode 53.74
247 TestMultiNode/serial/ValidateNameConflict 33.05
252 TestPreload 120.47
254 TestScheduledStopUnix 109.66
257 TestInsufficientStorage 10.76
258 TestRunningBinaryUpgrade 81.11
260 TestKubernetesUpgrade 382.11
261 TestMissingContainerUpgrade 174.44
263 TestPause/serial/Start 63.69
264 TestPause/serial/SecondStartNoReconfiguration 6.7
265 TestPause/serial/Pause 0.95
266 TestPause/serial/VerifyStatus 0.43
267 TestPause/serial/Unpause 0.91
268 TestPause/serial/PauseAgain 1.01
269 TestPause/serial/DeletePaused 2.92
270 TestPause/serial/VerifyDeletedResources 0.46
271 TestStoppedBinaryUpgrade/Setup 2.96
272 TestStoppedBinaryUpgrade/Upgrade 107.3
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/StartWithK8s 38.63
284 TestNoKubernetes/serial/StartWithStopK8s 17.49
288 TestNoKubernetes/serial/Start 6.55
293 TestNetworkPlugins/group/false 5.57
294 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
295 TestNoKubernetes/serial/ProfileList 0.8
299 TestNoKubernetes/serial/Stop 1.27
300 TestNoKubernetes/serial/StartNoArgs 7.14
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
303 TestStartStop/group/old-k8s-version/serial/FirstStart 154.44
305 TestStartStop/group/no-preload/serial/FirstStart 79.15
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
308 TestStartStop/group/old-k8s-version/serial/Stop 12.54
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
311 TestStartStop/group/no-preload/serial/DeployApp 9.47
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.72
313 TestStartStop/group/no-preload/serial/Stop 12.22
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
315 TestStartStop/group/no-preload/serial/SecondStart 289.29
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/no-preload/serial/Pause 2.99
321 TestStartStop/group/embed-certs/serial/FirstStart 66.04
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
325 TestStartStop/group/old-k8s-version/serial/Pause 3.74
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.1
328 TestStartStop/group/embed-certs/serial/DeployApp 8.42
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
330 TestStartStop/group/embed-certs/serial/Stop 12.11
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.75
333 TestStartStop/group/embed-certs/serial/SecondStart 276.33
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.77
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.49
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 278.28
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
341 TestStartStop/group/embed-certs/serial/Pause 3.53
343 TestStartStop/group/newest-cni/serial/FirstStart 46.86
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.79
348 TestNetworkPlugins/group/auto/Start 65.79
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.88
351 TestStartStop/group/newest-cni/serial/Stop 1.33
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
353 TestStartStop/group/newest-cni/serial/SecondStart 20.08
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
357 TestStartStop/group/newest-cni/serial/Pause 3.29
358 TestNetworkPlugins/group/kindnet/Start 64.65
359 TestNetworkPlugins/group/auto/KubeletFlags 0.41
360 TestNetworkPlugins/group/auto/NetCatPod 10.34
361 TestNetworkPlugins/group/auto/DNS 0.33
362 TestNetworkPlugins/group/auto/Localhost 0.25
363 TestNetworkPlugins/group/auto/HairPin 0.26
364 TestNetworkPlugins/group/calico/Start 78.91
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
367 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
368 TestNetworkPlugins/group/kindnet/DNS 0.24
369 TestNetworkPlugins/group/kindnet/Localhost 0.2
370 TestNetworkPlugins/group/kindnet/HairPin 0.23
371 TestNetworkPlugins/group/custom-flannel/Start 65.83
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.46
374 TestNetworkPlugins/group/calico/NetCatPod 10.36
375 TestNetworkPlugins/group/calico/DNS 0.19
376 TestNetworkPlugins/group/calico/Localhost 0.2
377 TestNetworkPlugins/group/calico/HairPin 0.23
378 TestNetworkPlugins/group/enable-default-cni/Start 92.2
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
381 TestNetworkPlugins/group/custom-flannel/DNS 0.25
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
384 TestNetworkPlugins/group/flannel/Start 63.65
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
392 TestNetworkPlugins/group/flannel/NetCatPod 11.4
393 TestNetworkPlugins/group/flannel/DNS 0.22
394 TestNetworkPlugins/group/flannel/Localhost 0.25
395 TestNetworkPlugins/group/bridge/Start 53.37
396 TestNetworkPlugins/group/flannel/HairPin 0.26
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.24
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.15
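
Note on reading this table: the columns are the test's position in the overall run order, its name, and its wall-clock duration in seconds. Gaps in the order numbers correspond to the 7 failed and 31 skipped tests that account for the remaining 38 of 335.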
TestDownloadOnly/v1.20.0/json-events (9.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-158763 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-158763 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.651948466s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.65s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-158763
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-158763: exit status 85 (83.527777ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-158763 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |          |
	|         | -p download-only-158763        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 17:35:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 17:35:16.719332  286175 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:35:16.719467  286175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:16.719479  286175 out.go:304] Setting ErrFile to fd 2...
	I0307 17:35:16.719484  286175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:16.719744  286175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	W0307 17:35:16.719898  286175 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18241-280769/.minikube/config/config.json: open /home/jenkins/minikube-integration/18241-280769/.minikube/config/config.json: no such file or directory
	I0307 17:35:16.720319  286175 out.go:298] Setting JSON to true
	I0307 17:35:16.721256  286175 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4661,"bootTime":1709828256,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:35:16.721333  286175 start.go:139] virtualization:  
	I0307 17:35:16.724103  286175 out.go:97] [download-only-158763] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 17:35:16.726315  286175 out.go:169] MINIKUBE_LOCATION=18241
	W0307 17:35:16.724303  286175 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 17:35:16.724340  286175 notify.go:220] Checking for updates...
	I0307 17:35:16.729851  286175 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:35:16.731460  286175 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:35:16.733332  286175 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:35:16.735009  286175 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 17:35:16.738715  286175 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 17:35:16.739093  286175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:35:16.759348  286175 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:35:16.759443  286175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:16.827399  286175 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 17:35:16.817961751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:16.827507  286175 docker.go:295] overlay module found
	I0307 17:35:16.829420  286175 out.go:97] Using the docker driver based on user configuration
	I0307 17:35:16.829447  286175 start.go:297] selected driver: docker
	I0307 17:35:16.829453  286175 start.go:901] validating driver "docker" against <nil>
	I0307 17:35:16.829584  286175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:16.881504  286175 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 17:35:16.872519051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:16.881724  286175 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 17:35:16.881999  286175 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 17:35:16.882153  286175 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 17:35:16.884360  286175 out.go:169] Using Docker driver with root privileges
	I0307 17:35:16.886000  286175 cni.go:84] Creating CNI manager for ""
	I0307 17:35:16.886021  286175 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:35:16.886032  286175 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 17:35:16.886123  286175 start.go:340] cluster config:
	{Name:download-only-158763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-158763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:35:16.888077  286175 out.go:97] Starting "download-only-158763" primary control-plane node in "download-only-158763" cluster
	I0307 17:35:16.888095  286175 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 17:35:16.889827  286175 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 17:35:16.889852  286175 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 17:35:16.889956  286175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 17:35:16.904231  286175 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 17:35:16.904865  286175 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 17:35:16.904966  286175 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 17:35:16.975511  286175 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 17:35:16.975536  286175 cache.go:56] Caching tarball of preloaded images
	I0307 17:35:16.976239  286175 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 17:35:16.978556  286175 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 17:35:16.978583  286175 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 17:35:17.107361  286175 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-158763 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158763"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
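
The download step logged above (download.go:107) appends the expected digest to the preload URL as ?checksum=md5:... and verifies it after the fetch. A minimal sketch of that verify-while-downloading idea, assuming a plain HTTP GET rather than minikube's actual downloader (the helper name here is ours, not minikube's):

    // checksum_download.go: hedged sketch of fetching a preload tarball and
    // checking the md5 digest that minikube encodes in the URL query.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 streams url into dest while hashing, then compares the
    // hex digest against want (e.g. "7e3d48ccb9f143791669d02e14ce1643").
    func downloadWithMD5(url, dest, want string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        // TeeReader hashes each chunk as it is written to disk,
        // so the tarball is read only once.
        if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        if len(os.Args) != 4 {
            fmt.Fprintln(os.Stderr, "usage: checksum_download <url> <dest> <md5>")
            os.Exit(2)
        }
        if err := downloadWithMD5(os.Args[1], os.Args[2], os.Args[3]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }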

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-158763
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.28.4/json-events (9.63s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-885743 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-885743 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.632912955s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.63s)
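
With -o=json, minikube emits one JSON event per line on stdout, which is what these json-events assertions consume. A hedged consumer sketch; the type/data field names follow the CloudEvents-style lines minikube prints, but treat the exact shape as an assumption rather than a stable contract:

    // json_events.go: pipe `minikube start -o=json ...` into this program.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string                 `json:"type"` // e.g. "io.k8s.sigs.minikube.step" (assumed)
        Data map[string]interface{} `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events can be long lines
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event
            }
            fmt.Printf("%-40s %v\n", ev.Type, ev.Data["message"])
        }
    }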

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-885743
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-885743: exit status 85 (89.054781ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-158763 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | -p download-only-158763        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-158763        | download-only-158763 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| start   | -o=json --download-only        | download-only-885743 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | -p download-only-885743        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 17:35:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 17:35:26.812136  286337 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:35:26.812363  286337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:26.812391  286337 out.go:304] Setting ErrFile to fd 2...
	I0307 17:35:26.812409  286337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:26.812687  286337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:35:26.813110  286337 out.go:298] Setting JSON to true
	I0307 17:35:26.814030  286337 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4671,"bootTime":1709828256,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:35:26.814122  286337 start.go:139] virtualization:  
	I0307 17:35:26.816734  286337 out.go:97] [download-only-885743] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 17:35:26.818695  286337 out.go:169] MINIKUBE_LOCATION=18241
	I0307 17:35:26.816934  286337 notify.go:220] Checking for updates...
	I0307 17:35:26.822824  286337 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:35:26.824931  286337 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:35:26.826477  286337 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:35:26.827961  286337 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 17:35:26.831265  286337 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 17:35:26.831544  286337 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:35:26.852063  286337 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:35:26.852164  286337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:26.916849  286337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:26.908021553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:26.916971  286337 docker.go:295] overlay module found
	I0307 17:35:26.919550  286337 out.go:97] Using the docker driver based on user configuration
	I0307 17:35:26.919580  286337 start.go:297] selected driver: docker
	I0307 17:35:26.919587  286337 start.go:901] validating driver "docker" against <nil>
	I0307 17:35:26.919692  286337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:26.973148  286337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:26.964064822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:26.973327  286337 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 17:35:26.973752  286337 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 17:35:26.973914  286337 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 17:35:26.976460  286337 out.go:169] Using Docker driver with root privileges
	I0307 17:35:26.978425  286337 cni.go:84] Creating CNI manager for ""
	I0307 17:35:26.978448  286337 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:35:26.978458  286337 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 17:35:26.978539  286337 start.go:340] cluster config:
	{Name:download-only-885743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-885743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:35:26.980808  286337 out.go:97] Starting "download-only-885743" primary control-plane node in "download-only-885743" cluster
	I0307 17:35:26.980828  286337 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 17:35:26.983298  286337 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 17:35:26.983327  286337 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 17:35:26.983435  286337 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 17:35:26.997573  286337 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 17:35:26.997703  286337 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 17:35:26.997722  286337 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 17:35:26.997727  286337 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 17:35:26.997735  286337 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 17:35:27.046996  286337 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 17:35:27.047027  286337 cache.go:56] Caching tarball of preloaded images
	I0307 17:35:27.047198  286337 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 17:35:27.049493  286337 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 17:35:27.049538  286337 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0307 17:35:27.174231  286337 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-885743 host does not exist
	  To start a cluster, run: "minikube start -p download-only-885743"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
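
The "Last Start" header above spells out the log line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A small sketch that splits such lines into fields with a regexp, e.g. to pull only the warnings and errors out of a captured log like this one; the regexp is ours, derived from that header:

    // klog_fields.go: split klog-format lines into their components.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // Groups: 1 severity, 2 mmdd, 3 time, 4 threadid, 5 file, 6 line, 7 message.
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := klogLine.FindStringSubmatch(strings.TrimSpace(sc.Text()))
            if m == nil {
                continue // audit tables, banners, blank lines
            }
            if m[1] == "W" || m[1] == "E" { // surface warnings/errors only
                fmt.Printf("%s %s %s:%s %s\n", m[1], m[3], m[5], m[6], m[7])
            }
        }
    }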

TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-885743
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (9.54s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-928192 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-928192 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.537973823s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (9.54s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-928192
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-928192: exit status 85 (191.454845ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-158763 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | -p download-only-158763           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-158763           | download-only-158763 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| start   | -o=json --download-only           | download-only-885743 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | -p download-only-885743           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| delete  | -p download-only-885743           | download-only-885743 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC | 07 Mar 24 17:35 UTC |
	| start   | -o=json --download-only           | download-only-928192 | jenkins | v1.32.0 | 07 Mar 24 17:35 UTC |                     |
	|         | -p download-only-928192           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 17:35:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 17:35:36.885578  286501 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:35:36.885705  286501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:36.885716  286501 out.go:304] Setting ErrFile to fd 2...
	I0307 17:35:36.885722  286501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:35:36.885954  286501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:35:36.886408  286501 out.go:298] Setting JSON to true
	I0307 17:35:36.887233  286501 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4681,"bootTime":1709828256,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:35:36.887303  286501 start.go:139] virtualization:  
	I0307 17:35:36.889437  286501 out.go:97] [download-only-928192] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 17:35:36.891433  286501 out.go:169] MINIKUBE_LOCATION=18241
	I0307 17:35:36.889631  286501 notify.go:220] Checking for updates...
	I0307 17:35:36.895156  286501 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:35:36.897640  286501 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:35:36.899227  286501 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:35:36.901392  286501 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 17:35:36.905063  286501 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 17:35:36.905355  286501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:35:36.925258  286501 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:35:36.925370  286501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:36.992422  286501 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:36.982928097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:36.992538  286501 docker.go:295] overlay module found
	I0307 17:35:36.994911  286501 out.go:97] Using the docker driver based on user configuration
	I0307 17:35:36.994947  286501 start.go:297] selected driver: docker
	I0307 17:35:36.994954  286501 start.go:901] validating driver "docker" against <nil>
	I0307 17:35:36.995071  286501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:35:37.073131  286501 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 17:35:37.063916059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:35:37.073298  286501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 17:35:37.073631  286501 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 17:35:37.073809  286501 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 17:35:37.075817  286501 out.go:169] Using Docker driver with root privileges
	I0307 17:35:37.077684  286501 cni.go:84] Creating CNI manager for ""
	I0307 17:35:37.077708  286501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 17:35:37.077717  286501 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 17:35:37.077815  286501 start.go:340] cluster config:
	{Name:download-only-928192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-928192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:35:37.079511  286501 out.go:97] Starting "download-only-928192" primary control-plane node in "download-only-928192" cluster
	I0307 17:35:37.079534  286501 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 17:35:37.081086  286501 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 17:35:37.081114  286501 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 17:35:37.081218  286501 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 17:35:37.099544  286501 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 17:35:37.099681  286501 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 17:35:37.099701  286501 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 17:35:37.099706  286501 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 17:35:37.099714  286501 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 17:35:37.156975  286501 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0307 17:35:37.157012  286501 cache.go:56] Caching tarball of preloaded images
	I0307 17:35:37.157753  286501 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 17:35:37.159821  286501 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 17:35:37.159842  286501 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 17:35:37.272564  286501 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18241-280769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-928192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-928192"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.19s)
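
The image.go lines in all three runs show the same lookup order for the kicbase image: local docker daemon first, then the on-disk cache directory, and a download only when both miss (the first run wrote the tarball; the later two skipped the pull). A hedged sketch of that decision flow; the function names and cache layout here are illustrative, not minikube's:

    // base_image_cache.go: illustrative daemon -> cache-dir -> download fallthrough.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // tarballPath derives one cache file per image reference (layout assumed).
    func tarballPath(cacheDir, ref string) string {
        safe := strings.NewReplacer("/", "_", ":", "_", "@", "_").Replace(ref)
        return filepath.Join(cacheDir, safe+".tar")
    }

    // ensureBaseImage returns a cached tarball path, downloading only on a miss.
    func ensureBaseImage(cacheDir, ref string, inDaemon func(string) bool) string {
        if inDaemon(ref) {
            fmt.Printf("found %s in local docker daemon, nothing to do\n", ref)
            return ""
        }
        p := tarballPath(cacheDir, ref)
        if _, err := os.Stat(p); err == nil {
            fmt.Printf("found %s in local cache directory, skipping pull\n", ref)
            return p
        }
        fmt.Printf("downloading %s to local cache\n", ref)
        // ...pull the image and write the tarball to p (omitted in this sketch)...
        return p
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244"
        notInDaemon := func(string) bool { return false } // stand-in daemon probe
        fmt.Println(ensureBaseImage(os.TempDir(), ref, notInDaemon))
    }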

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.35s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-928192
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.25s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-722782 --alsologtostderr --binary-mirror http://127.0.0.1:33827 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-722782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-722782
--- PASS: TestBinaryMirror (0.55s)
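
This test points --binary-mirror at a local HTTP endpoint (127.0.0.1:33827 above) so minikube fetches kubectl/kubelet/kubeadm from it instead of the default release hosts. A hedged sketch of such a mirror, assuming it only needs to serve files from a directory that mimics the upstream path layout (the exact layout minikube requests is an assumption here):

    // binary_mirror.go: serve a directory over HTTP for use with --binary-mirror.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // The directory must mirror the paths minikube requests, e.g. something
        // like <root>/v1.28.4/bin/linux/arm64/kubectl (layout is an assumption).
        fs := http.FileServer(http.Dir("./mirror"))
        log.Println("serving ./mirror on http://127.0.0.1:33827")
        log.Fatal(http.ListenAndServe("127.0.0.1:33827", fs))
    }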

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-493601
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-493601: exit status 85 (85.43986ms)

-- stdout --
	* Profile "addons-493601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493601"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-493601
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-493601: exit status 85 (89.728338ms)

-- stdout --
	* Profile "addons-493601" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-493601"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
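
Both PreSetup tests pass because exit status 85 is the expected outcome when the addressed profile does not exist yet; the tests assert on the exit code rather than treating it as a failure. A hedged sketch of that pattern (binary path and profile name copied from this report):

    // expect_exit85.go: run minikube against a missing profile, assert exit 85.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64",
            "addons", "enable", "dashboard", "-p", "addons-493601")
        out, err := cmd.CombinedOutput()

        // A non-zero exit surfaces as *exec.ExitError; 85 is the code the
        // PreSetup tests above expect for a profile that does not exist.
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 85 {
            fmt.Printf("got expected exit status 85:\n%s", out)
            return
        }
        fmt.Printf("unexpected result (err=%v):\n%s", err, out)
    }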

TestAddons/Setup (139.15s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-493601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-493601 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m19.1440546s)
--- PASS: TestAddons/Setup (139.15s)

TestAddons/parallel/Registry (14.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.283117ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qhxmn" [eb907622-518b-4292-acd8-b940ea762e72] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006038697s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bfgsj" [7b075a61-9dc9-453e-a33f-f03666c06bc1] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005226237s
addons_test.go:340: (dbg) Run:  kubectl --context addons-493601 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-493601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-493601 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.221627106s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 ip
2024/03/07 17:38:21 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 addons disable registry --alsologtostderr -v=1: (1.022708093s)
--- PASS: TestAddons/parallel/Registry (14.71s)
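
The in-cluster check above uses busybox's wget --spider against the registry's cluster DNS name: any HTTP answer proves the Service resolves and the registry proxy forwards. A hedged equivalent in Go, which would likewise only work from inside the cluster (or through a port-forward):

    // registry_probe.go: HEAD the registry Service, mirroring `wget --spider -S`.
    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        c := &http.Client{Timeout: 10 * time.Second}
        resp, err := c.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Fprintln(os.Stderr, "registry unreachable:", err)
            os.Exit(1)
        }
        resp.Body.Close()
        fmt.Println("registry answered:", resp.Status) // e.g. "200 OK"
    }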

TestAddons/parallel/InspektorGadget (10.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ftm9q" [5ced90d7-41d1-4875-95eb-e4a05773f02a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004163233s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-493601
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-493601: (5.798467341s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

TestAddons/parallel/MetricsServer (6.85s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.752005ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-fcvvl" [d8c7b8dc-17a4-491d-a01d-a51c4bbd7aba] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005283093s
addons_test.go:415: (dbg) Run:  kubectl --context addons-493601 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable metrics-server --alsologtostderr -v=1
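
kubectl top only works once the metrics-server APIService is serving. A minimal manual check, assuming the addon is enabled on the same profile:

	# Should report Available=True once metrics-server is up.
	kubectl --context addons-493601 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-493601 top pods -n kube-system
	kubectl --context addons-493601 top nodes
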
--- PASS: TestAddons/parallel/MetricsServer (6.85s)

TestAddons/parallel/CSI (44.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 46.518854ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-493601 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-493601 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ad8cb3ea-762f-491d-ad78-1cfa96aa39dc] Pending
helpers_test.go:344: "task-pv-pod" [ad8cb3ea-762f-491d-ad78-1cfa96aa39dc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ad8cb3ea-762f-491d-ad78-1cfa96aa39dc] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004197421s
addons_test.go:584: (dbg) Run:  kubectl --context addons-493601 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-493601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-493601 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-493601 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-493601 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-493601 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-493601 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d301bdfd-cf11-468b-82fd-8db5cae9f633] Pending
helpers_test.go:344: "task-pv-pod-restore" [d301bdfd-cf11-468b-82fd-8db5cae9f633] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003839745s
addons_test.go:626: (dbg) Run:  kubectl --context addons-493601 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-493601 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-493601 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.81360125s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable volumesnapshots --alsologtostderr -v=1
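
The restore step above hinges on the new PVC's dataSource pointing at the snapshot. A minimal sketch of such a manifest, assuming the csi-hostpath-sc storage class installed by the addon and the snapshot name used here:

	kubectl --context addons-493601 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed: the class created by csi-hostpath-driver
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF
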
--- PASS: TestAddons/parallel/CSI (44.59s)

TestAddons/parallel/Headlamp (11.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-493601 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-493601 --alsologtostderr -v=1: (1.54100509s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-zcmcg" [fff78f5c-fb36-48ca-a745-0010a5c77dac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-zcmcg" [fff78f5c-fb36-48ca-a745-0010a5c77dac] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003594828s
--- PASS: TestAddons/parallel/Headlamp (11.55s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-zm4n5" [8e2a4610-c1fe-46dc-92e5-78cb2de57529] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003357519s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-493601
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (53.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-493601 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-493601 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3d5b9599-e71b-4690-afb5-379a12c8d4ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3d5b9599-e71b-4690-afb5-379a12c8d4ac] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3d5b9599-e71b-4690-afb5-379a12c8d4ac] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003817181s
addons_test.go:891: (dbg) Run:  kubectl --context addons-493601 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 ssh "cat /opt/local-path-provisioner/pvc-d88010ac-e556-4434-84f7-887264f6234e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-493601 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-493601 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-493601 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-493601 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.221528524s)
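
The cat above shows where local-path-provisioner keeps its data: a bound claim is backed by a host directory named <pv>_<namespace>_<pvc> under /opt/local-path-provisioner on the node. A minimal sketch for poking at it, assuming the addon's default local-path storage class:

	kubectl --context addons-493601 get storageclass local-path
	out/minikube-linux-arm64 -p addons-493601 ssh "ls /opt/local-path-provisioner"
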
--- PASS: TestAddons/parallel/LocalPath (53.36s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cwt58" [195aa1f7-53dd-4d65-9e3d-2dea680daefa] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003920774s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-493601
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6.00s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-2nlg8" [499de3ea-89b3-4232-a534-88b4c53dfcfa] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00349133s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-493601 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-493601 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-493601
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-493601: (11.947895421s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-493601
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-493601
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-493601
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

TestCertOptions (35.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-849238 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-849238 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.759392883s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-849238 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-849238 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-849238 -- "sudo cat /etc/kubernetes/admin.conf"
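
The openssl and config-view checks above verify that the custom SANs, IPs, and apiserver port made it into the serving certificate and kubeconfig. A minimal sketch of the same inspection, assuming the profile is still running:

	out/minikube-linux-arm64 -p cert-options-849238 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"   # expect www.google.com and 192.168.15.15
	kubectl --context cert-options-849238 config view | grep 8555   # custom apiserver port
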
helpers_test.go:175: Cleaning up "cert-options-849238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-849238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-849238: (2.003737932s)
--- PASS: TestCertOptions (35.47s)

TestCertExpiration (225.60s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-733819 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-733819 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.258000547s)
E0307 18:18:08.249670  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-733819 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-733819 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.035823717s)
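
--cert-expiration sets the lifetime of the certificates minikube generates; the test starts with 3m certificates, waits for them to expire (the roughly three-minute gap above), then restarts with 8760h to force re-issuance. A minimal sketch for checking the current expiry, assuming the profile is running:

	out/minikube-linux-arm64 -p cert-expiration-733819 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
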
helpers_test.go:175: Cleaning up "cert-expiration-733819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-733819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-733819: (2.30294432s)
--- PASS: TestCertExpiration (225.60s)

TestForceSystemdFlag (43.62s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-773079 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-773079 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.297673392s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-773079 ssh "cat /etc/containerd/config.toml"
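
With --force-systemd the node's containerd is switched to the systemd cgroup driver, and the cat above is what the assertion reads. A minimal sketch of the same check (the TOML key is containerd's runc option):

	out/minikube-linux-arm64 -p force-systemd-flag-773079 ssh \
	  "grep SystemdCgroup /etc/containerd/config.toml"   # expect: SystemdCgroup = true
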
helpers_test.go:175: Cleaning up "force-systemd-flag-773079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-773079
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-773079: (2.033614115s)
--- PASS: TestForceSystemdFlag (43.62s)

TestForceSystemdEnv (37.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-197938 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0307 18:16:11.297948  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-197938 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.148329082s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-197938 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-197938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-197938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-197938: (2.046934891s)
--- PASS: TestForceSystemdEnv (37.47s)

TestDockerEnvContainerd (45.37s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-956924 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-956924 --driver=docker  --container-runtime=containerd: (29.355063437s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-956924"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-956924": (1.28427605s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Pgoz7tBLlYWI/agent.303335" SSH_AGENT_PID="303336" DOCKER_HOST=ssh://docker@127.0.0.1:33147 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Pgoz7tBLlYWI/agent.303335" SSH_AGENT_PID="303336" DOCKER_HOST=ssh://docker@127.0.0.1:33147 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Pgoz7tBLlYWI/agent.303335" SSH_AGENT_PID="303336" DOCKER_HOST=ssh://docker@127.0.0.1:33147 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.308002155s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Pgoz7tBLlYWI/agent.303335" SSH_AGENT_PID="303336" DOCKER_HOST=ssh://docker@127.0.0.1:33147 docker image ls"
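
docker-env --ssh-host points a host-side docker client at the daemon inside the minikube container over SSH (the DOCKER_HOST=ssh://... above). A minimal sketch of the same flow, with a hypothetical image tag:

	eval "$(out/minikube-linux-arm64 -p dockerenv-956924 docker-env --ssh-host --ssh-add)"
	docker version                    # now talks to the daemon inside the node
	DOCKER_BUILDKIT=0 docker build -t local/demo:latest testdata/docker-env
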
helpers_test.go:175: Cleaning up "dockerenv-956924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-956924
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-956924: (1.993344999s)
--- PASS: TestDockerEnvContainerd (45.37s)

TestErrorSpam/setup (31.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-460293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-460293 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-460293 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-460293 --driver=docker  --container-runtime=containerd: (31.164706983s)
--- PASS: TestErrorSpam/setup (31.16s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 pause
--- PASS: TestErrorSpam/pause (1.71s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 stop: (1.247911492s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-460293 --log_dir /tmp/nospam-460293 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18241-280769/.minikube/files/etc/test/nested/copy/286169/hosts
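
Files placed under the .minikube/files tree are copied into the node at the corresponding absolute path on start, which is what this test verifies. A minimal sketch, with a hypothetical profile name and the default .minikube location:

	mkdir -p ~/.minikube/files/etc/test/nested/copy
	echo hello > ~/.minikube/files/etc/test/nested/copy/hosts
	out/minikube-linux-arm64 start -p sync-demo --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p sync-demo ssh "cat /etc/test/nested/copy/hosts"
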
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (60.72s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-529713 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m0.715229807s)
--- PASS: TestFunctional/serial/StartWithProxy (60.72s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.98s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-529713 --alsologtostderr -v=8: (5.966662888s)
functional_test.go:659: soft start took 5.978220945s for "functional-529713" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.98s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-529713 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:3.1
E0307 17:43:08.250048  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.257786  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.268064  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.288383  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.328552  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.408816  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.569236  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:08.889845  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:3.1: (1.462087605s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:3.3
E0307 17:43:09.530740  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:3.3: (1.322746203s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:latest
E0307 17:43:10.811909  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:latest: (1.203794952s)
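
minikube cache add pulls an image into the host-side cache and loads it into the node's container runtime. A minimal sketch of the round trip:

	out/minikube-linux-arm64 -p functional-529713 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl images   # image visible in the node
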
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-529713 /tmp/TestFunctionalserialCacheCmdcacheadd_local483073422/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache add minikube-local-cache-test:functional-529713
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache delete minikube-local-cache-test:functional-529713
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-529713
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
E0307 17:43:13.372242  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.681084ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 cache reload: (1.171266088s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl inspecti registry.k8s.io/pause:latest
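
The reload sequence above: delete the image inside the node, confirm inspecti now fails (exit 1), then cache reload pushes every cached image back in. Condensed, assuming the image is already in the cache:

	out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-529713 cache reload
	out/minikube-linux-arm64 -p functional-529713 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again
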
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 kubectl -- --context functional-529713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-529713 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (45.51s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 17:43:18.492699  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:28.732961  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 17:43:49.213174  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-529713 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.512082706s)
functional_test.go:757: restart took 45.512209197s for "functional-529713" cluster.
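
--extra-config takes component.key=value pairs and persists them in the profile (visible in the ExtraOptions field of the config dumps below). A minimal sketch for confirming the flag reached the static pod, assuming the usual kubeadm component label:

	out/minikube-linux-arm64 start -p functional-529713 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-529713 -n kube-system get pod \
	  -l component=kube-apiserver -o yaml | grep enable-admission-plugins
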
--- PASS: TestFunctional/serial/ExtraConfig (45.51s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-529713 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 logs: (1.710075433s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 logs --file /tmp/TestFunctionalserialLogsFileCmd1473903591/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 logs --file /tmp/TestFunctionalserialLogsFileCmd1473903591/001/logs.txt: (1.728187493s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.50s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-529713 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-529713
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-529713: exit status 115 (616.006715ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31398 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-529713 delete -f testdata/invalidsvc.yaml
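
minikube service refuses to open a service whose selector matches no running pod, exiting 115 with SVC_UNREACHABLE as shown above. A minimal sketch of the failing shape, with a hypothetical selector that matches nothing:

	kubectl --context functional-529713 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod   # hypothetical label; nothing carries it
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-arm64 service invalid-svc -p functional-529713   # exit status 115
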
--- PASS: TestFunctional/serial/InvalidService (4.50s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 config get cpus: exit status 14 (91.758469ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 config get cpus: exit status 14 (99.954198ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
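
minikube config get exits 14 when the key is not set, which is what both Non-zero exits above demonstrate. The full cycle, condensed:

	out/minikube-linux-arm64 -p functional-529713 config set cpus 2
	out/minikube-linux-arm64 -p functional-529713 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-529713 config unset cpus
	out/minikube-linux-arm64 -p functional-529713 config get cpus     # exit 14: key not found
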
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (13.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-529713 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-529713 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 317571: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.18s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-529713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (275.041207ms)

-- stdout --
	* [functional-529713] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0307 17:44:43.750126  317102 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:44:43.750325  317102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:44:43.750367  317102 out.go:304] Setting ErrFile to fd 2...
	I0307 17:44:43.750394  317102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:44:43.750707  317102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:44:43.751186  317102 out.go:298] Setting JSON to false
	I0307 17:44:43.752426  317102 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5228,"bootTime":1709828256,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:44:43.752556  317102 start.go:139] virtualization:  
	I0307 17:44:43.757370  317102 out.go:177] * [functional-529713] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 17:44:43.760177  317102 notify.go:220] Checking for updates...
	I0307 17:44:43.762409  317102 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 17:44:43.764765  317102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:44:43.766723  317102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:44:43.768670  317102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:44:43.772349  317102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 17:44:43.774426  317102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 17:44:43.777382  317102 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:44:43.778256  317102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:44:43.822059  317102 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:44:43.822188  317102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:44:43.892863  317102 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-07 17:44:43.882721496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:44:43.892968  317102 docker.go:295] overlay module found
	I0307 17:44:43.895641  317102 out.go:177] * Using the docker driver based on existing profile
	I0307 17:44:43.897699  317102 start.go:297] selected driver: docker
	I0307 17:44:43.897718  317102 start.go:901] validating driver "docker" against &{Name:functional-529713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-529713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:44:43.897834  317102 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 17:44:43.900645  317102 out.go:177] 
	W0307 17:44:43.902941  317102 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 17:44:43.904805  317102 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-529713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-529713 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (280.6639ms)

-- stdout --
	* [functional-529713] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0307 17:44:43.430218  317041 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:44:43.430439  317041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:44:43.430466  317041 out.go:304] Setting ErrFile to fd 2...
	I0307 17:44:43.430484  317041 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:44:43.430868  317041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:44:43.431310  317041 out.go:298] Setting JSON to false
	I0307 17:44:43.432408  317041 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5227,"bootTime":1709828256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 17:44:43.432516  317041 start.go:139] virtualization:  
	I0307 17:44:43.435519  317041 out.go:177] * [functional-529713] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0307 17:44:43.438446  317041 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 17:44:43.438476  317041 notify.go:220] Checking for updates...
	I0307 17:44:43.442836  317041 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 17:44:43.444929  317041 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 17:44:43.449979  317041 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 17:44:43.452593  317041 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 17:44:43.455362  317041 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 17:44:43.458314  317041 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:44:43.459126  317041 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 17:44:43.500721  317041 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 17:44:43.500834  317041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:44:43.618578  317041 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-07 17:44:43.606466197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:44:43.618694  317041 docker.go:295] overlay module found
	I0307 17:44:43.621196  317041 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0307 17:44:43.623383  317041 start.go:297] selected driver: docker
	I0307 17:44:43.623405  317041 start.go:901] validating driver "docker" against &{Name:functional-529713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-529713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 17:44:43.623521  317041 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 17:44:43.626237  317041 out.go:177] 
	W0307 17:44:43.628792  317041 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 17:44:43.630798  317041 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
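Note: DryRun and InternationalLanguage hit the same pre-flight guard; the French stderr above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" — "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB") is simply the localized form of the English message in the DryRun output. A minimal sketch of this class of check, assuming the 1800MB floor stated in the log (names hypothetical, not minikube's actual code):

	package main

	import "fmt"

	const minUsableMB = 1800 // floor taken from the log message; an assumption

	// validateMemory mirrors the kind of pre-flight check behind
	// RSRC_INSUFFICIENT_REQ_MEMORY: reject before any driver work starts.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err) // the real binary exits 23 here
		}
	}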

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
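Note: the -f format at functional_test.go:856 is a Go text/template rendered over the status struct; "kublet" there is a literal label in the output string, while {{.Kubelet}} must name the real field. A self-contained sketch of how that kind of template renders (the Status struct is a hypothetical stand-in, not minikube's type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in with the fields the template dereferences.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Same format string as the run above; only the {{...}} keys must
		// match struct fields, the labels around them are free text.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(format))
		if err := t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"}); err != nil {
			panic(err)
		}
	}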

TestFunctional/parallel/ServiceCmdConnect (10.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-529713 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-529713 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6jldc" [2614a80b-395e-491b-aa1d-122c87253729] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-6jldc" [2614a80b-395e-491b-aa1d-122c87253729] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004049735s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30374
functional_test.go:1671: http://192.168.49.2:30374: success! body:

Hostname: hello-node-connect-7799dfb7c6-6jldc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30374
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.67s)
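Note: the flow above is create deployment → expose as NodePort → resolve the URL with `service ... --url` → fetch until the echoserver answers. A sketch of that final fetch-with-retry step (URL taken from this run; the 30s timeout is an arbitrary choice):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollEndpoint retries a NodePort URL until the service responds or the
	// deadline passes, roughly what the harness does after
	// "service hello-node-connect --url" prints the endpoint.
	func pollEndpoint(url string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				body, readErr := io.ReadAll(resp.Body)
				resp.Body.Close()
				if readErr == nil {
					return string(body), nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("no response from %s within %v", url, timeout)
	}

	func main() {
		body, err := pollEndpoint("http://192.168.49.2:30374", 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(body) // echoserver reports Hostname, request info, headers
	}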

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9ccded4c-fcb2-4d88-bd71-48a532530c44] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00404078s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-529713 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-529713 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-529713 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-529713 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87fc5cff-002f-41e8-bbfb-066df8167c2a] Pending
helpers_test.go:344: "sp-pod" [87fc5cff-002f-41e8-bbfb-066df8167c2a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [87fc5cff-002f-41e8-bbfb-066df8167c2a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004079921s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-529713 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-529713 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-529713 delete -f testdata/storage-provisioner/pod.yaml: (1.179959129s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-529713 apply -f testdata/storage-provisioner/pod.yaml
E0307 17:44:30.174030  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08e4d078-39ad-457a-a103-b52bb7fa5b63] Pending
helpers_test.go:344: "sp-pod" [08e4d078-39ad-457a-a103-b52bb7fa5b63] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [08e4d078-39ad-457a-a103-b52bb7fa5b63] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004771266s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-529713 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.23s)
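Note: the assertion that matters here is persistence across pod recreation: touch a file under the PVC-backed mount, delete the pod, re-apply it, and list the mount again. A compressed sketch of that sequence shelling out to kubectl (context and paths from this run; the real test also waits for the new pod to be Running between apply and exec):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-529713"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write into the PVC
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // recycle the pod
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // new pod, same claim
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // foo must still exist
		}
		for _, s := range steps {
			out, err := kubectl(s...)
			fmt.Printf("kubectl %v -> %s (err=%v)\n", s, out, err)
		}
	}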

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh -n functional-529713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cp functional-529713:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3467419785/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh -n functional-529713 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh -n functional-529713 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.50s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/286169/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /etc/test/nested/copy/286169/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/286169.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /etc/ssl/certs/286169.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/286169.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /usr/share/ca-certificates/286169.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2861692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /etc/ssl/certs/2861692.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2861692.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /usr/share/ca-certificates/2861692.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)
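Note: the hashed filenames checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow OpenSSL's subject-hash convention, where a certificate is findable under "<subject_hash>.0". A sketch that derives that name for one of the synced PEMs (assumes openssl on PATH; path taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -subject_hash -noout` prints the 8-hex-digit hash
		// that becomes the "<hash>.0" name under /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-subject_hash", "-noout",
			"-in", "/usr/share/ca-certificates/286169.pem").Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
	}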

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-529713 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "sudo systemctl is-active docker": exit status 1 (333.33392ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "sudo systemctl is-active crio": exit status 1 (343.329562ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
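Note: the two "Non-zero exit" results above are the expected outcome, not failures: `systemctl is-active` prints "inactive" and exits with status 3 when a unit is not running, so a non-zero exit is exactly what a containerd-only node should report for docker and crio. A sketch reading both the text and the exit code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
			code := 0
			if ee, ok := err.(*exec.ExitError); ok {
				code = ee.ExitCode() // 3 means "inactive" for is-active
			}
			fmt.Printf("%s: %s (exit %d)\n", unit, strings.TrimSpace(string(out)), code)
		}
	}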

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 314850: os: process already finished
helpers_test.go:502: unable to terminate pid 314695: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.55s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-529713 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [00736708-da04-443f-9583-ed97eca2b0f1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [00736708-da04-443f-9583-ed97eca2b0f1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.012647061s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-529713 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
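Note: with `minikube tunnel` running, the nginx-svc LoadBalancer acquires an ingress IP, which the test reads back with the JSONPath query shown above. The equivalent lookup, shelling out from Go (names from this run; an empty result means the tunnel has not populated the status yet):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same query the test runs against the service status.
		out, err := exec.Command("kubectl", "--context", "functional-529713",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err != nil {
			fmt.Println("kubectl:", err)
			return
		}
		fmt.Println("ingress IP:", string(out))
	}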

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.150.225 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-529713 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-529713 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-529713 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-fm77g" [24bebd28-3eaf-4d65-98b6-80dcbe477503] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-fm77g" [24bebd28-3eaf-4d65-98b6-80dcbe477503] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004751294s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.24s)
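Note: DeployApp repeats the deploy/expose/wait pattern used throughout this file. The label-selector wait that the harness implements by polling pod phases can also be expressed directly with `kubectl wait` (a sketch; context, label, and the 10m budget come from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until every pod labeled app=hello-node reports Ready.
		out, err := exec.Command("kubectl", "--context", "functional-529713",
			"wait", "--for=condition=ready", "pod",
			"-l", "app=hello-node", "--timeout=10m").CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}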

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "359.127226ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "56.950507ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "351.890008ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "76.806299ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/MountCmd/any-port (6.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdany-port3370359236/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709833479818968145" to /tmp/TestFunctionalparallelMountCmdany-port3370359236/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709833479818968145" to /tmp/TestFunctionalparallelMountCmdany-port3370359236/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709833479818968145" to /tmp/TestFunctionalparallelMountCmdany-port3370359236/001/test-1709833479818968145
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (500.356325ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 17:44 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 17:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 17:44 test-1709833479818968145
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh cat /mount-9p/test-1709833479818968145
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-529713 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f132c47b-84c8-4be5-9a8c-a670f0dff208] Pending
helpers_test.go:344: "busybox-mount" [f132c47b-84c8-4be5-9a8c-a670f0dff208] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f132c47b-84c8-4be5-9a8c-a670f0dff208] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f132c47b-84c8-4be5-9a8c-a670f0dff208] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.005279604s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-529713 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdany-port3370359236/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.64s)
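Note: the first `findmnt -T /mount-9p | grep 9p` probe above fails simply because the 9p mount is not up yet; the harness retries and the second probe succeeds. A local sketch of the same probe-with-retry (the real test runs findmnt inside the node via `minikube ssh`; target and fstype are from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForMount polls findmnt until target shows up with fstype 9p.
	// findmnt -T resolves the filesystem containing the path, so before the
	// mount lands it reports the parent filesystem rather than failing.
	func waitForMount(target string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("findmnt", "-T", target, "-n", "-o", "FSTYPE").Output()
			if err == nil && strings.TrimSpace(string(out)) == "9p" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not mounted as 9p within %v", target, timeout)
	}

	func main() {
		if err := waitForMount("/mount-9p", 10*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("/mount-9p is a 9p mount")
	}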

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service list -o json
functional_test.go:1490: Took "587.696109ms" to run "out/minikube-linux-arm64 -p functional-529713 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31910
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31910
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdspecific-port2466127171/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.75268ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdspecific-port2466127171/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "sudo umount -f /mount-9p": exit status 1 (302.175005ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-529713 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdspecific-port2466127171/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T" /mount1: exit status 1 (736.422135ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-529713 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-529713 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2632187625/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.29s)

TestFunctional/parallel/Version/short (0.14s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

TestFunctional/parallel/Version/components (1.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 version -o=json --components: (1.558838844s)
--- PASS: TestFunctional/parallel/Version/components (1.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-529713 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-529713
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-529713 image ls --format short --alsologtostderr:
I0307 17:45:09.649765  319567 out.go:291] Setting OutFile to fd 1 ...
I0307 17:45:09.649991  319567 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.650016  319567 out.go:304] Setting ErrFile to fd 2...
I0307 17:45:09.650038  319567 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.650277  319567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
I0307 17:45:09.650937  319567 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.651123  319567 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.651623  319567 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
I0307 17:45:09.671657  319567 ssh_runner.go:195] Run: systemctl --version
I0307 17:45:09.671707  319567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
I0307 17:45:09.701755  319567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
I0307 17:45:09.790206  319567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
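Note: on the containerd runtime, `image ls` is served by `sudo crictl images --output json` inside the node, visible as the last step of the stderr trace above. A sketch decoding the interesting fields of that JSON (the struct follows the field names crictl emits for CRI images; treat it as an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image models the subset of crictl's JSON this sketch prints.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl:", err)
			return
		}
		var list struct {
			Images []image `json:"images"`
		}
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags { // e.g. registry.k8s.io/pause:3.9
				fmt.Println(tag)
			}
		}
	}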

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-529713 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| docker.io/library/minikube-local-cache-test | functional-529713  | sha256:752d41 | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| docker.io/library/nginx                     | latest             | sha256:760b7c | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-529713 image ls --format table --alsologtostderr:
I0307 17:45:09.992791  319628 out.go:291] Setting OutFile to fd 1 ...
I0307 17:45:09.992914  319628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.992920  319628 out.go:304] Setting ErrFile to fd 2...
I0307 17:45:09.992925  319628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.993193  319628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
I0307 17:45:09.993858  319628 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.993984  319628 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.994473  319628 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
I0307 17:45:10.017115  319628 ssh_runner.go:195] Run: systemctl --version
I0307 17:45:10.017179  319628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
I0307 17:45:10.044379  319628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
I0307 17:45:10.141546  319628 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)
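Note on the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls in the stderr above: Docker publishes the node container's port 22 on a random host port, and that Go template digs the mapped port out of NetworkSettings before minikube opens its ssh client. A minimal standalone sketch of the lookup (not minikube's actual helper; the profile name is taken from this run):

```go
// Sketch: resolve the host port Docker mapped to the node container's sshd,
// using the same inspect template the stderr above shows minikube running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-529713") // profile/container name from this run
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", port) // this run's logs show Port:33157
}
```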
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-529713 image ls --format json --alsologtostderr:
[{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216905"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:752d41641ad1eccd741c2fdbce1f072d9b1cba679aa6faa508c524eaea22178e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-529713"],"size":"1007"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-529713 image ls --format json --alsologtostderr:
I0307 17:45:09.948357  319623 out.go:291] Setting OutFile to fd 1 ...
I0307 17:45:09.948576  319623 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.948602  319623 out.go:304] Setting ErrFile to fd 2...
I0307 17:45:09.948620  319623 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.948890  319623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
I0307 17:45:09.949587  319623 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.949778  319623 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.950392  319623 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
I0307 17:45:09.977962  319623 ssh_runner.go:195] Run: systemctl --version
I0307 17:45:09.978023  319623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
I0307 17:45:10.006517  319623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
I0307 17:45:10.099308  319623 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
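For reference, the JSON payload above decodes cleanly with a small struct mirroring its keys. An illustrative sketch, not minikube source, with one element lifted verbatim from the output:

```go
// Sketch: decode the `image ls --format json` payload shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type listImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// One element lifted from the output above.
	raw := `[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"}]`
	var imgs []listImage
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%v  %s bytes\n", im.RepoTags, im.Size)
	}
}
```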
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-529713 image ls --format yaml --alsologtostderr:
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:752d41641ad1eccd741c2fdbce1f072d9b1cba679aa6faa508c524eaea22178e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-529713
size: "1007"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "67216905"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-529713 image ls --format yaml --alsologtostderr:
I0307 17:45:09.656203  319568 out.go:291] Setting OutFile to fd 1 ...
I0307 17:45:09.656354  319568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.656378  319568 out.go:304] Setting ErrFile to fd 2...
I0307 17:45:09.656397  319568 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:09.656679  319568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
I0307 17:45:09.657363  319568 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.657575  319568 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:09.658281  319568 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
I0307 17:45:09.690832  319568 ssh_runner.go:195] Run: systemctl --version
I0307 17:45:09.690884  319568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
I0307 17:45:09.718654  319568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
I0307 17:45:09.811941  319568 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-529713 ssh pgrep buildkitd: exit status 1 (292.500733ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image build -t localhost/my-image:functional-529713 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-529713 image build -t localhost/my-image:functional-529713 testdata/build --alsologtostderr: (2.141172257s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-529713 image build -t localhost/my-image:functional-529713 testdata/build --alsologtostderr:
I0307 17:45:10.507810  319727 out.go:291] Setting OutFile to fd 1 ...
I0307 17:45:10.508504  319727 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:10.508520  319727 out.go:304] Setting ErrFile to fd 2...
I0307 17:45:10.508526  319727 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 17:45:10.508810  319727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
I0307 17:45:10.509677  319727 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:10.511039  319727 config.go:182] Loaded profile config "functional-529713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 17:45:10.511569  319727 cli_runner.go:164] Run: docker container inspect functional-529713 --format={{.State.Status}}
I0307 17:45:10.527766  319727 ssh_runner.go:195] Run: systemctl --version
I0307 17:45:10.527814  319727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-529713
I0307 17:45:10.544662  319727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/functional-529713/id_rsa Username:docker}
I0307 17:45:10.637895  319727 build_images.go:151] Building image from path: /tmp/build.1898415710.tar
I0307 17:45:10.637962  319727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 17:45:10.646885  319727 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1898415710.tar
I0307 17:45:10.650364  319727 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1898415710.tar: stat -c "%s %y" /var/lib/minikube/build/build.1898415710.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1898415710.tar': No such file or directory
I0307 17:45:10.650394  319727 ssh_runner.go:362] scp /tmp/build.1898415710.tar --> /var/lib/minikube/build/build.1898415710.tar (3072 bytes)
I0307 17:45:10.675284  319727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1898415710
I0307 17:45:10.684273  319727 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1898415710 -xf /var/lib/minikube/build/build.1898415710.tar
I0307 17:45:10.694020  319727 containerd.go:379] Building image: /var/lib/minikube/build/build.1898415710
I0307 17:45:10.694139  319727 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1898415710 --local dockerfile=/var/lib/minikube/build/build.1898415710 --output type=image,name=localhost/my-image:functional-529713
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:78a8e74266cf98d4a372189ca5102cddcedb39e09041bbe132394bd295b8a676 0.0s done
#8 exporting config sha256:0bedbcf1909c7a10c4b13817f45f5a36b25f71d98b229e8101726e16880ed679 0.0s done
#8 naming to localhost/my-image:functional-529713 done
#8 DONE 0.2s
I0307 17:45:12.552185  319727 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1898415710 --local dockerfile=/var/lib/minikube/build/build.1898415710 --output type=image,name=localhost/my-image:functional-529713: (1.858014294s)
I0307 17:45:12.552256  319727 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1898415710
I0307 17:45:12.567120  319727 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1898415710.tar
I0307 17:45:12.580949  319727 build_images.go:207] Built localhost/my-image:functional-529713 from /tmp/build.1898415710.tar
I0307 17:45:12.580979  319727 build_images.go:123] succeeded building to: functional-529713
I0307 17:45:12.580984  319727 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
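The build path visible in the stderr is: tar the local build context, scp it to /var/lib/minikube/build on the node, untar it there, then drive BuildKit's CLI. A standalone sketch of the final node-side step, using exactly the flags from the log (illustrative, not the build_images.go code):

```go
// Sketch of the node-side build step: once the context tar is unpacked under
// /var/lib/minikube/build/<dir>, BuildKit's CLI does the actual build.
package main

import (
	"os"
	"os/exec"
)

func buildWithBuildctl(ctxDir, tag string) error {
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+ctxDir,
		"--local", "dockerfile="+ctxDir,
		"--output", "type=image,name="+tag,
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // streams the #1..#8 steps seen above
	return cmd.Run()
}

func main() {
	if err := buildWithBuildctl("/var/lib/minikube/build/build.1898415710",
		"localhost/my-image:functional-529713"); err != nil {
		os.Exit(1)
	}
}
```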
TestFunctional/parallel/ImageCommands/Setup (2.34s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.305621s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-529713
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image rm gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-529713
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-529713 image save --daemon gcr.io/google-containers/addon-resizer:functional-529713 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-529713
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-529713
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-529713
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-529713
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMutliControlPlane/serial/StartCluster (130.89s)
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-001180 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 17:45:52.095200  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-001180 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m10.033158412s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (130.89s)
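An illustrative follow-up check, not part of the test: nodes that `--ha` starts as control planes should carry the standard control-plane role label (the later status output in this report lists ha-001180, -m02 and -m03 as Control Plane):

```go
// Sketch: list the nodes Kubernetes itself labels as control planes after
// `minikube start --ha`. Uses kubectl via os/exec; context name from this run.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-001180",
		"get", "nodes", "-l", "node-role.kubernetes.io/control-plane",
		"-o", "name").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // expect three node/ entries for this run
}
```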
TestMutliControlPlane/serial/DeployApp (34.63s)
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-001180 -- rollout status deployment/busybox: (31.155179004s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-k7gsd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-qtvwr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-s2ddp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-k7gsd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-qtvwr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-s2ddp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-k7gsd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-qtvwr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-s2ddp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (34.63s)
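The nine exec calls above form a 3x3 matrix: each busybox replica must resolve an external name, the kubernetes service short name, and its fully qualified form. Condensed into one loop (illustrative; pod names are the ones from this run):

```go
// Sketch: every busybox replica resolves the external and in-cluster names.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5b5d89c9d6-k7gsd", "busybox-5b5d89c9d6-qtvwr", "busybox-5b5d89c9d6-s2ddp"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			if err := exec.Command("kubectl", "--context", "ha-001180",
				"exec", pod, "--", "nslookup", name).Run(); err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n", pod, name, err)
			}
		}
	}
}
```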
TestMutliControlPlane/serial/PingHostFromPods (1.7s)
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-k7gsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-k7gsd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-qtvwr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-qtvwr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-s2ddp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-001180 -- exec busybox-5b5d89c9d6-s2ddp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.70s)
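The shell pipeline above parses busybox's nslookup output: line 5 carries the resolved address, and the third space-separated field is the IP, which the test then pings (192.168.49.1, the Docker network gateway, in this run). A sketch folding the two exec steps into one (an assumed simplification, not the test code):

```go
// Sketch: resolve host.minikube.internal from inside a pod and ping it once.
package main

import (
	"os"
	"os/exec"
)

func main() {
	script := `ip=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3); ping -c 1 "$ip"`
	cmd := exec.Command("kubectl", "--context", "ha-001180",
		"exec", "busybox-5b5d89c9d6-k7gsd", "--", "sh", "-c", script)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```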
TestMutliControlPlane/serial/AddWorkerNode (25.24s)
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-001180 -v=7 --alsologtostderr
E0307 17:48:08.248646  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-001180 -v=7 --alsologtostderr: (24.178467157s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr: (1.062911452s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (25.24s)

TestMutliControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-001180 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.13s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.81s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.81s)

TestMutliControlPlane/serial/CopyFile (19.7s)
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 status --output json -v=7 --alsologtostderr: (1.014272137s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp testdata/cp-test.txt ha-001180:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1037055725/001/cp-test_ha-001180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180:/home/docker/cp-test.txt ha-001180-m02:/home/docker/cp-test_ha-001180_ha-001180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test_ha-001180_ha-001180-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180:/home/docker/cp-test.txt ha-001180-m03:/home/docker/cp-test_ha-001180_ha-001180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test_ha-001180_ha-001180-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180:/home/docker/cp-test.txt ha-001180-m04:/home/docker/cp-test_ha-001180_ha-001180-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test_ha-001180_ha-001180-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp testdata/cp-test.txt ha-001180-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1037055725/001/cp-test_ha-001180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test.txt"
E0307 17:48:35.936198  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m02:/home/docker/cp-test.txt ha-001180:/home/docker/cp-test_ha-001180-m02_ha-001180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test_ha-001180-m02_ha-001180.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m02:/home/docker/cp-test.txt ha-001180-m03:/home/docker/cp-test_ha-001180-m02_ha-001180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test_ha-001180-m02_ha-001180-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m02:/home/docker/cp-test.txt ha-001180-m04:/home/docker/cp-test_ha-001180-m02_ha-001180-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test_ha-001180-m02_ha-001180-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp testdata/cp-test.txt ha-001180-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1037055725/001/cp-test_ha-001180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m03:/home/docker/cp-test.txt ha-001180:/home/docker/cp-test_ha-001180-m03_ha-001180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test_ha-001180-m03_ha-001180.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m03:/home/docker/cp-test.txt ha-001180-m02:/home/docker/cp-test_ha-001180-m03_ha-001180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test_ha-001180-m03_ha-001180-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m03:/home/docker/cp-test.txt ha-001180-m04:/home/docker/cp-test_ha-001180-m03_ha-001180-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test_ha-001180-m03_ha-001180-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp testdata/cp-test.txt ha-001180-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1037055725/001/cp-test_ha-001180-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m04:/home/docker/cp-test.txt ha-001180:/home/docker/cp-test_ha-001180-m04_ha-001180.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180 "sudo cat /home/docker/cp-test_ha-001180-m04_ha-001180.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m04:/home/docker/cp-test.txt ha-001180-m02:/home/docker/cp-test_ha-001180-m04_ha-001180-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m02 "sudo cat /home/docker/cp-test_ha-001180-m04_ha-001180-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 cp ha-001180-m04:/home/docker/cp-test.txt ha-001180-m03:/home/docker/cp-test_ha-001180-m04_ha-001180-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /home/docker/cp-test_ha-001180-m04_ha-001180-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (19.70s)
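Each triple of commands above is one round trip: `minikube cp` pushes testdata/cp-test.txt to a node, then `ssh ... sudo cat` reads it back for comparison. One such round trip as a standalone sketch (paths and node names from this run):

```go
// Sketch: push a file to a node with `minikube cp`, read it back over ssh,
// and compare the contents.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", "ha-001180", "cp",
		"testdata/cp-test.txt", "ha-001180-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-001180", "ssh",
		"-n", "ha-001180-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
		os.Exit(1) // round trip corrupted the file
	}
}
```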
TestMutliControlPlane/serial/StopSecondaryNode (12.85s)
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 node stop m02 -v=7 --alsologtostderr: (12.103187415s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr: exit status 7 (741.765165ms)

-- stdout --
	ha-001180
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-001180-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001180-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-001180-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0307 17:49:01.025478  334968 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:49:01.025686  334968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:49:01.025698  334968 out.go:304] Setting ErrFile to fd 2...
	I0307 17:49:01.025704  334968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:49:01.025983  334968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:49:01.026637  334968 out.go:298] Setting JSON to false
	I0307 17:49:01.026716  334968 mustload.go:65] Loading cluster: ha-001180
	I0307 17:49:01.026839  334968 notify.go:220] Checking for updates...
	I0307 17:49:01.027204  334968 config.go:182] Loaded profile config "ha-001180": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:49:01.027242  334968 status.go:255] checking status of ha-001180 ...
	I0307 17:49:01.027801  334968 cli_runner.go:164] Run: docker container inspect ha-001180 --format={{.State.Status}}
	I0307 17:49:01.047622  334968 status.go:330] ha-001180 host status = "Running" (err=<nil>)
	I0307 17:49:01.047645  334968 host.go:66] Checking if "ha-001180" exists ...
	I0307 17:49:01.047963  334968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-001180
	I0307 17:49:01.065939  334968 host.go:66] Checking if "ha-001180" exists ...
	I0307 17:49:01.066236  334968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 17:49:01.066286  334968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-001180
	I0307 17:49:01.092015  334968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/ha-001180/id_rsa Username:docker}
	I0307 17:49:01.199050  334968 ssh_runner.go:195] Run: systemctl --version
	I0307 17:49:01.203959  334968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 17:49:01.216874  334968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 17:49:01.277693  334968 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-07 17:49:01.268025211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 17:49:01.278271  334968 kubeconfig.go:125] found "ha-001180" server: "https://192.168.49.254:8443"
	I0307 17:49:01.278301  334968 api_server.go:166] Checking apiserver status ...
	I0307 17:49:01.278343  334968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 17:49:01.290013  334968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	I0307 17:49:01.299863  334968 api_server.go:182] apiserver freezer: "6:freezer:/docker/acd44ea2b8994e65617a26f1fe0dd2e2c51adbe81f8f36d353d965234d27ad1d/kubepods/burstable/pod64a41d4d20e3842ba0fef4262263cb3e/b66a7d16289a56efb111753989abdb6267f25dc0a11b7126585caa147b62e4a2"
	I0307 17:49:01.299945  334968 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/acd44ea2b8994e65617a26f1fe0dd2e2c51adbe81f8f36d353d965234d27ad1d/kubepods/burstable/pod64a41d4d20e3842ba0fef4262263cb3e/b66a7d16289a56efb111753989abdb6267f25dc0a11b7126585caa147b62e4a2/freezer.state
	I0307 17:49:01.309143  334968 api_server.go:204] freezer state: "THAWED"
	I0307 17:49:01.309175  334968 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 17:49:01.317570  334968 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 17:49:01.317601  334968 status.go:422] ha-001180 apiserver status = Running (err=<nil>)
	I0307 17:49:01.317614  334968 status.go:257] ha-001180 status: &{Name:ha-001180 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 17:49:01.317646  334968 status.go:255] checking status of ha-001180-m02 ...
	I0307 17:49:01.317984  334968 cli_runner.go:164] Run: docker container inspect ha-001180-m02 --format={{.State.Status}}
	I0307 17:49:01.333766  334968 status.go:330] ha-001180-m02 host status = "Stopped" (err=<nil>)
	I0307 17:49:01.333789  334968 status.go:343] host is not running, skipping remaining checks
	I0307 17:49:01.333797  334968 status.go:257] ha-001180-m02 status: &{Name:ha-001180-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 17:49:01.333819  334968 status.go:255] checking status of ha-001180-m03 ...
	I0307 17:49:01.334150  334968 cli_runner.go:164] Run: docker container inspect ha-001180-m03 --format={{.State.Status}}
	I0307 17:49:01.351554  334968 status.go:330] ha-001180-m03 host status = "Running" (err=<nil>)
	I0307 17:49:01.351580  334968 host.go:66] Checking if "ha-001180-m03" exists ...
	I0307 17:49:01.351884  334968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-001180-m03
	I0307 17:49:01.367601  334968 host.go:66] Checking if "ha-001180-m03" exists ...
	I0307 17:49:01.367911  334968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 17:49:01.367966  334968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-001180-m03
	I0307 17:49:01.394263  334968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/ha-001180-m03/id_rsa Username:docker}
	I0307 17:49:01.482772  334968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 17:49:01.495972  334968 kubeconfig.go:125] found "ha-001180" server: "https://192.168.49.254:8443"
	I0307 17:49:01.495999  334968 api_server.go:166] Checking apiserver status ...
	I0307 17:49:01.496042  334968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 17:49:01.507095  334968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup
	I0307 17:49:01.518742  334968 api_server.go:182] apiserver freezer: "6:freezer:/docker/c29048cc33bb88600cb5c3d516ae8c4e71d547b94bac0b60d112b10a8f98c724/kubepods/burstable/podfd0d3a66adb9964eb7091240736e1838/284be52a67d506824339f63f770591c50232fca89c10a5b3a4efe39989542863"
	I0307 17:49:01.518862  334968 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c29048cc33bb88600cb5c3d516ae8c4e71d547b94bac0b60d112b10a8f98c724/kubepods/burstable/podfd0d3a66adb9964eb7091240736e1838/284be52a67d506824339f63f770591c50232fca89c10a5b3a4efe39989542863/freezer.state
	I0307 17:49:01.528222  334968 api_server.go:204] freezer state: "THAWED"
	I0307 17:49:01.528301  334968 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 17:49:01.537203  334968 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 17:49:01.537230  334968 status.go:422] ha-001180-m03 apiserver status = Running (err=<nil>)
	I0307 17:49:01.537239  334968 status.go:257] ha-001180-m03 status: &{Name:ha-001180-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 17:49:01.537257  334968 status.go:255] checking status of ha-001180-m04 ...
	I0307 17:49:01.537650  334968 cli_runner.go:164] Run: docker container inspect ha-001180-m04 --format={{.State.Status}}
	I0307 17:49:01.556710  334968 status.go:330] ha-001180-m04 host status = "Running" (err=<nil>)
	I0307 17:49:01.556733  334968 host.go:66] Checking if "ha-001180-m04" exists ...
	I0307 17:49:01.557055  334968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-001180-m04
	I0307 17:49:01.575307  334968 host.go:66] Checking if "ha-001180-m04" exists ...
	I0307 17:49:01.575616  334968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 17:49:01.575659  334968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-001180-m04
	I0307 17:49:01.593863  334968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/ha-001180-m04/id_rsa Username:docker}
	I0307 17:49:01.682536  334968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 17:49:01.694340  334968 status.go:257] ha-001180-m04 status: &{Name:ha-001180-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.85s)
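The stderr trace above is the per-node probe that `minikube status` runs against a control-plane node: inspect the Docker container, open an SSH session, find the kube-apiserver process with pgrep, confirm its freezer cgroup is THAWED, then hit the apiserver healthz endpoint. A rough manual equivalent, assuming the profile and load-balancer endpoint seen in this run (ha-001180, 192.168.49.254:8443) and substituting wildcards for the run-specific container and pod IDs in the cgroup path:

	# locate the apiserver process on the node, as the status probe does
	out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# check the freezer state (expect THAWED); the wildcards stand in for run-specific IDs
	out/minikube-linux-arm64 -p ha-001180 ssh -n ha-001180-m03 "sudo cat /sys/fs/cgroup/freezer/docker/*/kubepods/burstable/*/*/freezer.state"
	# probe the health endpoint (this run shows 200/ok); depending on apiserver auth settings this may require cluster credentials
	curl -sk https://192.168.49.254:8443/healthz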

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMutliControlPlane/serial/RestartSecondaryNode (18.25s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 node start m02 -v=7 --alsologtostderr
E0307 17:49:11.653435  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.659056  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.669673  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.690091  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.730801  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.811298  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:11.971489  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:12.292209  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:12.933271  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:14.213849  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:16.774715  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 node start m02 -v=7 --alsologtostderr: (17.154318771s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (18.25s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                    
TestMutliControlPlane/serial/RestartClusterKeepsNodes (143.78s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-001180 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-001180 -v=7 --alsologtostderr
E0307 17:49:21.895283  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:32.136263  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 17:49:52.616598  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-001180 -v=7 --alsologtostderr: (37.468247473s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-001180 --wait=true -v=7 --alsologtostderr
E0307 17:50:33.577004  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-001180 --wait=true -v=7 --alsologtostderr: (1m46.097123903s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-001180
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (143.78s)

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 node delete m03 -v=7 --alsologtostderr: (9.520574077s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
E0307 17:51:55.497377  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (36.04s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-001180 stop -v=7 --alsologtostderr: (35.897215378s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr: exit status 7 (141.624274ms)

                                                
                                                
-- stdout --
	ha-001180
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001180-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-001180-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 17:52:32.056010  348479 out.go:291] Setting OutFile to fd 1 ...
	I0307 17:52:32.056257  348479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:52:32.056286  348479 out.go:304] Setting ErrFile to fd 2...
	I0307 17:52:32.056305  348479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 17:52:32.056603  348479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 17:52:32.056832  348479 out.go:298] Setting JSON to false
	I0307 17:52:32.056889  348479 mustload.go:65] Loading cluster: ha-001180
	I0307 17:52:32.056965  348479 notify.go:220] Checking for updates...
	I0307 17:52:32.058289  348479 config.go:182] Loaded profile config "ha-001180": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 17:52:32.058316  348479 status.go:255] checking status of ha-001180 ...
	I0307 17:52:32.058930  348479 cli_runner.go:164] Run: docker container inspect ha-001180 --format={{.State.Status}}
	I0307 17:52:32.077247  348479 status.go:330] ha-001180 host status = "Stopped" (err=<nil>)
	I0307 17:52:32.077284  348479 status.go:343] host is not running, skipping remaining checks
	I0307 17:52:32.077293  348479 status.go:257] ha-001180 status: &{Name:ha-001180 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 17:52:32.077318  348479 status.go:255] checking status of ha-001180-m02 ...
	I0307 17:52:32.077663  348479 cli_runner.go:164] Run: docker container inspect ha-001180-m02 --format={{.State.Status}}
	I0307 17:52:32.095295  348479 status.go:330] ha-001180-m02 host status = "Stopped" (err=<nil>)
	I0307 17:52:32.095317  348479 status.go:343] host is not running, skipping remaining checks
	I0307 17:52:32.095324  348479 status.go:257] ha-001180-m02 status: &{Name:ha-001180-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 17:52:32.095347  348479 status.go:255] checking status of ha-001180-m04 ...
	I0307 17:52:32.095723  348479 cli_runner.go:164] Run: docker container inspect ha-001180-m04 --format={{.State.Status}}
	I0307 17:52:32.120299  348479 status.go:330] ha-001180-m04 host status = "Stopped" (err=<nil>)
	I0307 17:52:32.120323  348479 status.go:343] host is not running, skipping remaining checks
	I0307 17:52:32.120330  348479 status.go:257] ha-001180-m04 status: &{Name:ha-001180-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (36.04s)
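Worth noting from the non-zero exit above: `minikube status` signals cluster state through its exit code (7 here, with every host stopped), so scripts can branch on the code instead of parsing the table. A minimal sketch:

	out/minikube-linux-arm64 -p ha-001180 status
	echo "exit code: $?"   # 7 in this run, since all hosts are stopped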

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (46.82s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-001180 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 17:53:08.249447  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-001180 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.772971227s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (46.82s)
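The check at ha_test.go:592 prints each node's Ready condition through a kubectl go-template. An equivalent hand-run form using jsonpath (an alternative output format, not what the test itself uses) would be:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'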

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.6s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.60s)

                                                
                                    
TestMutliControlPlane/serial/AddSecondaryNode (45.76s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-001180 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-001180 --control-plane -v=7 --alsologtostderr: (44.764181763s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-001180 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (45.76s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
TestJSONOutput/start/Command (58.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-144519 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0307 17:54:39.337672  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-144519 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (58.234868331s)
--- PASS: TestJSONOutput/start/Command (58.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-144519 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-144519 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.78s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-144519 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-144519 --output=json --user=testUser: (5.778442938s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-581231 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-581231 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.437539ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"029151dd-8288-4c60-9c51-cbaf98e09514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581231] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f706120-12ee-4150-bc4f-5e47b383053d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18241"}}
	{"specversion":"1.0","id":"6fb69145-e6f9-495d-b82e-605c95b381f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"51c587cc-7b94-401b-966b-702ad5059da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig"}}
	{"specversion":"1.0","id":"ccc246aa-d6cd-4e78-9dba-b05df2921ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube"}}
	{"specversion":"1.0","id":"b15488a7-52be-4562-ad12-b497a1aaa9c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bba56294-2c63-49c0-b283-48cc556d300c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37febde9-0640-4aff-a41d-19359b697346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-581231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-581231
--- PASS: TestErrorJSONOutput (0.25s)
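Each stdout line above is a CloudEvents-style JSON record whose type field distinguishes steps, info, and errors. Assuming jq is available on the host (it is not part of this test), the error event from a run like this one can be extracted with:

	out/minikube-linux-arm64 start -p json-output-error-581231 --memory=2200 --output=json --wait=true --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64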

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.27s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-767727 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-767727 --network=: (39.138396544s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-767727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-767727
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-767727: (2.11398773s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.27s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.56s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-336761 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-336761 --network=bridge: (33.664623397s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-336761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-336761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-336761: (1.871874581s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.56s)

                                                
                                    
TestKicExistingNetwork (36.23s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-991140 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-991140 --network=existing-network: (34.146867894s)
helpers_test.go:175: Cleaning up "existing-network-991140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-991140
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-991140: (1.931216875s)
--- PASS: TestKicExistingNetwork (36.23s)
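TestKicExistingNetwork starts minikube against a Docker network that already exists instead of letting minikube create one; the initial `docker network ls` above is part of that setup. Presumably the test pre-creates the network before the start shown in the log; a manual equivalent with this run's names would be roughly:

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-991140 --network=existing-network
	docker network ls --format '{{.Name}}'   # should include existing-network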

                                                
                                    
TestKicCustomSubnet (32.41s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-648807 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-648807 --subnet=192.168.60.0/24: (30.327368651s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-648807 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-648807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-648807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-648807: (2.060399402s)
--- PASS: TestKicCustomSubnet (32.41s)
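The inspect command above pulls the first IPAM subnet from the created network, which is what the test compares against the requested --subnet. The same comparison as a shell one-liner (a sketch, not the test's own assertion):

	test "$(docker network inspect custom-subnet-648807 --format '{{(index .IPAM.Config 0).Subnet}}')" = "192.168.60.0/24" && echo "subnet matches"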

                                                
                                    
TestKicStaticIP (35.72s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-374133 --static-ip=192.168.200.200
E0307 17:58:08.249249  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-374133 --static-ip=192.168.200.200: (33.44915189s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-374133 ip
helpers_test.go:175: Cleaning up "static-ip-374133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-374133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-374133: (2.107430526s)
--- PASS: TestKicStaticIP (35.72s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (69.51s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-871688 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-871688 --driver=docker  --container-runtime=containerd: (31.642518332s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-874355 --driver=docker  --container-runtime=containerd
E0307 17:59:11.654145  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-874355 --driver=docker  --container-runtime=containerd: (32.510189804s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-871688
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-874355
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-874355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-874355
E0307 17:59:31.296803  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-874355: (1.913999129s)
helpers_test.go:175: Cleaning up "first-871688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-871688
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-871688: (2.179559525s)
--- PASS: TestMinikubeProfile (69.51s)
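`profile list -ojson` emits the profile table in machine-readable form. Assuming the JSON carries the usual valid/invalid arrays of profiles with Name fields (an assumption about this minikube version's schema, not shown in the log), the profiles from this run could be listed with jq:

	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'
	# -> first-871688
	#    second-874355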

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-032388 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-032388 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.426523959s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.43s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-032388 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-045689 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-045689 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.861733727s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-045689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-032388 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-032388 --alsologtostderr -v=5: (1.612083586s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-045689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-045689
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-045689: (1.204610279s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-045689
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-045689: (7.213535022s)
--- PASS: TestMountStart/serial/RestartStopped (8.21s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-045689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (76.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-373426 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-373426 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.76631068s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.27s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-373426 -- rollout status deployment/busybox: (1.783770186s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- nslookup kubernetes.io: (5.284888846s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-hqr5s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-hqr5s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-hqr5s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.83s)
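DeployApp2Nodes rolls out a two-replica busybox deployment and resolves kubernetes.io, kubernetes.default, and the full service name once per pod. To confirm the replicas actually landed on separate nodes, which is the point of the two-node deployment, the standard wide listing works:

	kubectl --context multinode-373426 get pods -o wide
	# the NODE column should show multinode-373426 and multinode-373426-m02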

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-75djt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-hqr5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-373426 -- exec busybox-5b5d89c9d6-hqr5s -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
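The pipeline above extracts the resolved address for host.minikube.internal from busybox's nslookup output (the address sits on line 5 of that output in this busybox image, hence awk 'NR==5'), and the follow-up ping confirms the pod can reach the host gateway, 192.168.58.1 in this run. Run directly with kubectl (assumed equivalent to the `minikube kubectl --` wrapper the test uses):

	kubectl --context multinode-373426 exec busybox-5b5d89c9d6-75djt -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# -> 192.168.58.1
	kubectl --context multinode-373426 exec busybox-5b5d89c9d6-75djt -- ping -c 1 192.168.58.1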

                                                
                                    
TestMultiNode/serial/AddNode (15.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-373426 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-373426 -v 3 --alsologtostderr: (15.295311038s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.99s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-373426 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp testdata/cp-test.txt multinode-373426:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554118079/001/cp-test_multinode-373426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426:/home/docker/cp-test.txt multinode-373426-m02:/home/docker/cp-test_multinode-373426_multinode-373426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test_multinode-373426_multinode-373426-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426:/home/docker/cp-test.txt multinode-373426-m03:/home/docker/cp-test_multinode-373426_multinode-373426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test_multinode-373426_multinode-373426-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp testdata/cp-test.txt multinode-373426-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554118079/001/cp-test_multinode-373426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m02:/home/docker/cp-test.txt multinode-373426:/home/docker/cp-test_multinode-373426-m02_multinode-373426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test_multinode-373426-m02_multinode-373426.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m02:/home/docker/cp-test.txt multinode-373426-m03:/home/docker/cp-test_multinode-373426-m02_multinode-373426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test_multinode-373426-m02_multinode-373426-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp testdata/cp-test.txt multinode-373426-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3554118079/001/cp-test_multinode-373426-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m03:/home/docker/cp-test.txt multinode-373426:/home/docker/cp-test_multinode-373426-m03_multinode-373426.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426 "sudo cat /home/docker/cp-test_multinode-373426-m03_multinode-373426.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426-m03:/home/docker/cp-test.txt multinode-373426-m02:/home/docker/cp-test_multinode-373426-m03_multinode-373426-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test_multinode-373426-m03_multinode-373426-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.12s)
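The CopyFile sequence exercises `minikube cp` in every direction, each copy verified with an ssh'd cat. The general shapes used above (destination paths simplified here; the host-side target in the log is a test-generated temp dir):

	out/minikube-linux-arm64 -p multinode-373426 cp testdata/cp-test.txt multinode-373426:/home/docker/cp-test.txt    # host -> node
	out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426:/home/docker/cp-test.txt /tmp/cp-test.txt        # node -> host
	out/minikube-linux-arm64 -p multinode-373426 cp multinode-373426:/home/docker/cp-test.txt multinode-373426-m02:/home/docker/cp-test.txt   # node -> node
	out/minikube-linux-arm64 -p multinode-373426 ssh -n multinode-373426-m02 "sudo cat /home/docker/cp-test.txt"      # verify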

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-373426 node stop m03: (1.224177021s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-373426 status: exit status 7 (514.62127ms)

                                                
                                                
-- stdout --
	multinode-373426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-373426-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-373426-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr: exit status 7 (499.940843ms)

                                                
                                                
-- stdout --
	multinode-373426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-373426-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-373426-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 18:01:57.670175  399922 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:01:57.670402  399922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:57.670429  399922 out.go:304] Setting ErrFile to fd 2...
	I0307 18:01:57.670447  399922 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:01:57.670710  399922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 18:01:57.670955  399922 out.go:298] Setting JSON to false
	I0307 18:01:57.671018  399922 mustload.go:65] Loading cluster: multinode-373426
	I0307 18:01:57.671042  399922 notify.go:220] Checking for updates...
	I0307 18:01:57.671506  399922 config.go:182] Loaded profile config "multinode-373426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:01:57.671544  399922 status.go:255] checking status of multinode-373426 ...
	I0307 18:01:57.672411  399922 cli_runner.go:164] Run: docker container inspect multinode-373426 --format={{.State.Status}}
	I0307 18:01:57.689597  399922 status.go:330] multinode-373426 host status = "Running" (err=<nil>)
	I0307 18:01:57.689620  399922 host.go:66] Checking if "multinode-373426" exists ...
	I0307 18:01:57.689920  399922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-373426
	I0307 18:01:57.706443  399922 host.go:66] Checking if "multinode-373426" exists ...
	I0307 18:01:57.706774  399922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:01:57.706829  399922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-373426
	I0307 18:01:57.727129  399922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/multinode-373426/id_rsa Username:docker}
	I0307 18:01:57.819316  399922 ssh_runner.go:195] Run: systemctl --version
	I0307 18:01:57.824157  399922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:01:57.836124  399922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:01:57.891060  399922 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-07 18:01:57.882054826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:01:57.891802  399922 kubeconfig.go:125] found "multinode-373426" server: "https://192.168.58.2:8443"
	I0307 18:01:57.891837  399922 api_server.go:166] Checking apiserver status ...
	I0307 18:01:57.891884  399922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 18:01:57.903166  399922 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I0307 18:01:57.912728  399922 api_server.go:182] apiserver freezer: "6:freezer:/docker/07e173b334ebc0261bf7d162a045512a076b60268cf51aadbe057847cb571ff3/kubepods/burstable/pod5ba1e5ecb40d9caf7a47c2b171c5fad1/c9b74f92e2fa06d43faa22ee9d71c6c154c7f6fad14d4b319c297233f94f2a17"
	I0307 18:01:57.912806  399922 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/07e173b334ebc0261bf7d162a045512a076b60268cf51aadbe057847cb571ff3/kubepods/burstable/pod5ba1e5ecb40d9caf7a47c2b171c5fad1/c9b74f92e2fa06d43faa22ee9d71c6c154c7f6fad14d4b319c297233f94f2a17/freezer.state
	I0307 18:01:57.921817  399922 api_server.go:204] freezer state: "THAWED"
	I0307 18:01:57.921853  399922 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 18:01:57.930228  399922 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 18:01:57.930256  399922 status.go:422] multinode-373426 apiserver status = Running (err=<nil>)
	I0307 18:01:57.930277  399922 status.go:257] multinode-373426 status: &{Name:multinode-373426 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:01:57.930299  399922 status.go:255] checking status of multinode-373426-m02 ...
	I0307 18:01:57.930597  399922 cli_runner.go:164] Run: docker container inspect multinode-373426-m02 --format={{.State.Status}}
	I0307 18:01:57.946064  399922 status.go:330] multinode-373426-m02 host status = "Running" (err=<nil>)
	I0307 18:01:57.946087  399922 host.go:66] Checking if "multinode-373426-m02" exists ...
	I0307 18:01:57.946400  399922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-373426-m02
	I0307 18:01:57.962436  399922 host.go:66] Checking if "multinode-373426-m02" exists ...
	I0307 18:01:57.962759  399922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 18:01:57.962805  399922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-373426-m02
	I0307 18:01:57.979444  399922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/18241-280769/.minikube/machines/multinode-373426-m02/id_rsa Username:docker}
	I0307 18:01:58.074965  399922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 18:01:58.086938  399922 status.go:257] multinode-373426-m02 status: &{Name:multinode-373426-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:01:58.086973  399922 status.go:255] checking status of multinode-373426-m03 ...
	I0307 18:01:58.087307  399922 cli_runner.go:164] Run: docker container inspect multinode-373426-m03 --format={{.State.Status}}
	I0307 18:01:58.103411  399922 status.go:330] multinode-373426-m03 host status = "Stopped" (err=<nil>)
	I0307 18:01:58.103438  399922 status.go:343] host is not running, skipping remaining checks
	I0307 18:01:58.103447  399922 status.go:257] multinode-373426-m03 status: &{Name:multinode-373426-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
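
For reference, the status probes captured in the stderr above reduce to a handful of shell commands run inside the node over SSH. A minimal standalone sketch follows; the IDs in the cgroup path are placeholders for whatever docker inspect and pgrep return on a given run, and the curl call stands in for the Go HTTP client minikube actually uses:

	df -h /var | awk 'NR==2{print $5}'             # disk pressure: use% of /var (row 2, column 5)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # find the apiserver PID
	sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-id>/<ctr-id>/freezer.state   # "THAWED" = running
	curl -sk https://192.168.58.2:8443/healthz     # expect HTTP 200 and body "ok"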

TestMultiNode/serial/StartAfterStop (9.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-373426 node start m03 -v=7 --alsologtostderr: (8.72919605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.47s)

TestMultiNode/serial/RestartKeepsNodes (78.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-373426
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-373426
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-373426: (24.984401449s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-373426 --wait=true -v=8 --alsologtostderr
E0307 18:03:08.249570  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-373426 --wait=true -v=8 --alsologtostderr: (53.835335838s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-373426
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.97s)

TestMultiNode/serial/DeleteNode (5.4s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-373426 node delete m03: (4.72583038s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.40s)
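
The per-node readiness assertion above leans on a kubectl go-template that walks each node's status.conditions and prints the status of the Ready condition, one line per node. As captured by the harness (a real shell invocation would need the inner quotes escaped):

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"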

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-373426 stop: (23.912845295s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-373426 status: exit status 7 (92.713775ms)

-- stdout --
	multinode-373426
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-373426-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr: exit status 7 (95.82812ms)

-- stdout --
	multinode-373426
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-373426-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0307 18:03:56.003555  407496 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:03:56.003706  407496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:03:56.003716  407496 out.go:304] Setting ErrFile to fd 2...
	I0307 18:03:56.003722  407496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:03:56.003965  407496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 18:03:56.004146  407496 out.go:298] Setting JSON to false
	I0307 18:03:56.004181  407496 mustload.go:65] Loading cluster: multinode-373426
	I0307 18:03:56.004284  407496 notify.go:220] Checking for updates...
	I0307 18:03:56.004584  407496 config.go:182] Loaded profile config "multinode-373426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 18:03:56.004602  407496 status.go:255] checking status of multinode-373426 ...
	I0307 18:03:56.005084  407496 cli_runner.go:164] Run: docker container inspect multinode-373426 --format={{.State.Status}}
	I0307 18:03:56.028388  407496 status.go:330] multinode-373426 host status = "Stopped" (err=<nil>)
	I0307 18:03:56.028436  407496 status.go:343] host is not running, skipping remaining checks
	I0307 18:03:56.028443  407496 status.go:257] multinode-373426 status: &{Name:multinode-373426 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 18:03:56.028472  407496 status.go:255] checking status of multinode-373426-m02 ...
	I0307 18:03:56.028781  407496 cli_runner.go:164] Run: docker container inspect multinode-373426-m02 --format={{.State.Status}}
	I0307 18:03:56.045265  407496 status.go:330] multinode-373426-m02 host status = "Stopped" (err=<nil>)
	I0307 18:03:56.045286  407496 status.go:343] host is not running, skipping remaining checks
	I0307 18:03:56.045294  407496 status.go:257] multinode-373426-m02 status: &{Name:multinode-373426-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)
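
Note that exit status 7 from status is the expected outcome here rather than a failure: minikube encodes "host stopped" in the exit code, and other steps in this report explicitly log "status error: exit status 7 (may be ok)". A script consuming this should branch on the code, e.g.:

	out/minikube-linux-arm64 -p multinode-373426 status
	echo $?   # 7 when the hosts are stopped, not a test error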

TestMultiNode/serial/RestartMultiNode (53.74s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-373426 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 18:04:11.653874  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-373426 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.02258174s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-373426 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.74s)

TestMultiNode/serial/ValidateNameConflict (33.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-373426
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-373426-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-373426-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.225757ms)

-- stdout --
	* [multinode-373426-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-373426-m02' is duplicated with machine name 'multinode-373426-m02' in profile 'multinode-373426'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-373426-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-373426-m03 --driver=docker  --container-runtime=containerd: (30.2858783s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-373426
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-373426: exit status 80 (310.669632ms)

-- stdout --
	* Adding node m03 to cluster multinode-373426 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-373426-m03 already exists in multinode-373426-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-373426-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-373426-m03: (2.297098768s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.05s)
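
Condensed, the two name-conflict guards exercised above (commands as run; the first exits 14 with MK_USAGE, the second exits 80 with GUEST_NODE_ADD):

	# profile name collides with a machine name inside an existing profile
	out/minikube-linux-arm64 start -p multinode-373426-m02 --driver=docker --container-runtime=containerd
	# node add refuses a node name already claimed by another profile
	out/minikube-linux-arm64 node add -p multinode-373426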

TestPreload (120.47s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0307 18:05:34.697860  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.278448785s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-677696 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-677696 image pull gcr.io/k8s-minikube/busybox: (1.387069766s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-677696
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-677696: (12.048342593s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (34.974427286s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-677696 image list
helpers_test.go:175: Cleaning up "test-preload-677696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-677696
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-677696: (2.423611369s)
--- PASS: TestPreload (120.47s)
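
Condensed from the run above, the preload check is: create a cluster with the preloaded-images tarball disabled on an older Kubernetes, pull an extra image, stop, restart with defaults, and confirm the image survived (flags trimmed to the essentials):

	out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	out/minikube-linux-arm64 -p test-preload-677696 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p test-preload-677696
	out/minikube-linux-arm64 start -p test-preload-677696 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p test-preload-677696 image list   # busybox should still be listed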

TestScheduledStopUnix (109.66s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-485086 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-485086 --memory=2048 --driver=docker  --container-runtime=containerd: (32.515776294s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-485086 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-485086 -n scheduled-stop-485086
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-485086 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-485086 --cancel-scheduled
E0307 18:08:08.249103  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-485086 -n scheduled-stop-485086
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-485086
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-485086 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-485086
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-485086: exit status 7 (77.807196ms)

-- stdout --
	scheduled-stop-485086
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-485086 -n scheduled-stop-485086
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-485086 -n scheduled-stop-485086: exit status 7 (71.3749ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-485086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-485086
E0307 18:09:11.654180  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-485086: (5.357314431s)
--- PASS: TestScheduledStopUnix (109.66s)
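
The scheduled-stop surface exercised above, in isolation (commands as logged; --schedule arms a delayed stop, --cancel-scheduled disarms it, and the TimeToStop status field exposes the pending timer):

	out/minikube-linux-arm64 stop -p scheduled-stop-485086 --schedule 5m
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-485086 -n scheduled-stop-485086
	out/minikube-linux-arm64 stop -p scheduled-stop-485086 --cancel-scheduled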

TestInsufficientStorage (10.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-758172 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-758172 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.321015363s)

-- stdout --
	{"specversion":"1.0","id":"8f2f7e67-d17a-4136-9a59-a7378f403822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-758172] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"27b8e288-1acd-4e56-8c23-2aa771bec9dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18241"}}
	{"specversion":"1.0","id":"93a22e0c-4850-4741-bd5d-5040b7e1ba8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0822f276-d7a4-4590-b67c-e4c9beed7bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig"}}
	{"specversion":"1.0","id":"8db1c3f7-74d7-4626-a816-197226f3a037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube"}}
	{"specversion":"1.0","id":"a117927a-9b6f-4aad-a1a2-da708785339a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"151387d9-fab6-4494-8ec4-12363e9aef6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"477dec3e-ddae-496a-9e42-d8f94b815e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"df929292-2f66-448b-85df-46dc87c31396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3f7c25c5-6580-4255-808f-a423e5f794d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8324139-39ab-4689-b183-3dc29a7cd0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e86b15e0-fd7c-417f-ae2a-4a1ef05bf7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-758172\" primary control-plane node in \"insufficient-storage-758172\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dddf703c-b306-48ff-bb86-3d1d17bc8ce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ec36c07-baba-415a-a069-1a12c0e49a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3ba2081-3a79-4e09-89aa-62c50d24e421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-758172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-758172 --output=json --layout=cluster: exit status 7 (279.044603ms)

-- stdout --
	{"Name":"insufficient-storage-758172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-758172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0307 18:09:25.530909  425073 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-758172" does not appear in /home/jenkins/minikube-integration/18241-280769/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-758172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-758172 --output=json --layout=cluster: exit status 7 (271.870409ms)

-- stdout --
	{"Name":"insufficient-storage-758172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-758172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0307 18:09:25.803181  425123 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-758172" does not appear in /home/jenkins/minikube-integration/18241-280769/kubeconfig
	E0307 18:09:25.812940  425123 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/insufficient-storage-758172/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-758172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-758172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-758172: (1.885339518s)
--- PASS: TestInsufficientStorage (10.76s)
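
The artificial storage limits come from the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the JSON events above. Assuming those are plain environment variables (they only appear here as info messages), the failure should reproduce as exit code 26 (RSRC_DOCKER_STORAGE):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-758172 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd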

TestRunningBinaryUpgrade (81.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4098214404 start -p running-upgrade-607655 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4098214404 start -p running-upgrade-607655 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.678329482s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-607655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-607655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.948496286s)
helpers_test.go:175: Cleaning up "running-upgrade-607655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-607655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-607655: (2.574495297s)
--- PASS: TestRunningBinaryUpgrade (81.11s)

TestKubernetesUpgrade (382.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.85002727s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-479589
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-479589: (1.494327985s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-479589 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-479589 status --format={{.Host}}: exit status 7 (173.964332ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m6.310742785s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-479589 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (113.707313ms)

-- stdout --
	* [kubernetes-upgrade-479589] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-479589
	    minikube start -p kubernetes-upgrade-479589 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4795892 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-479589 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.451462628s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-479589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-479589
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-479589: (2.593313191s)
--- PASS: TestKubernetesUpgrade (382.11s)
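
Condensed, the upgrade path the test walks; the downgrade attempt in the middle is refused with K8S_DOWNGRADE_UNSUPPORTED, and the stderr above prints the delete/start pair to use instead:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-479589
	out/minikube-linux-arm64 start -p kubernetes-upgrade-479589 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=containerd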

TestMissingContainerUpgrade (174.44s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2721702059 start -p missing-upgrade-880969 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2721702059 start -p missing-upgrade-880969 --memory=2200 --driver=docker  --container-runtime=containerd: (1m33.439691797s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-880969
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-880969: (10.259666965s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-880969
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-880969 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-880969 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.912325184s)
helpers_test.go:175: Cleaning up "missing-upgrade-880969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-880969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-880969: (2.401319889s)
--- PASS: TestMissingContainerUpgrade (174.44s)
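
The scenario above simulates a cluster whose container vanished behind minikube's back: a legacy binary creates it, docker removes it, and the current binary must recreate it on start. Sketch (the /tmp binary name carries a random suffix in practice, elided here):

	/tmp/minikube-v1.26.0.<suffix> start -p missing-upgrade-880969 --memory=2200 --driver=docker --container-runtime=containerd
	docker stop missing-upgrade-880969 && docker rm missing-upgrade-880969
	out/minikube-linux-arm64 start -p missing-upgrade-880969 --memory=2200 --driver=docker --container-runtime=containerd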

TestPause/serial/Start (63.69s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-198345 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-198345 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m3.683955636s)
--- PASS: TestPause/serial/Start (63.69s)

TestPause/serial/SecondStartNoReconfiguration (6.7s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-198345 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-198345 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.683974629s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.70s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-198345 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-198345 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-198345 --output=json --layout=cluster: exit status 2 (431.673537ms)

-- stdout --
	{"Name":"pause-198345","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-198345","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-198345 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.01s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-198345 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-198345 --alsologtostderr -v=5: (1.008033911s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

TestPause/serial/DeletePaused (2.92s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-198345 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-198345 --alsologtostderr -v=5: (2.924733021s)
--- PASS: TestPause/serial/DeletePaused (2.92s)

TestPause/serial/VerifyDeletedResources (0.46s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-198345
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-198345: exit status 1 (17.398931ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-198345: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
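
After delete, the checks above confirm nothing is left behind at the Docker level; the failing volume lookup is the pass condition:

	docker ps -a                         # no pause-198345 container remains
	docker volume inspect pause-198345   # "no such volume" (exit 1) is expected
	docker network ls                    # no leftover profile network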

TestStoppedBinaryUpgrade/Setup (2.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.96s)

TestStoppedBinaryUpgrade/Upgrade (107.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3566615251 start -p stopped-upgrade-256577 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0307 18:13:08.256593  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3566615251 start -p stopped-upgrade-256577 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.144363871s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3566615251 -p stopped-upgrade-256577 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3566615251 -p stopped-upgrade-256577 stop: (19.949735164s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-256577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0307 18:14:11.653822  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-256577 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.203946347s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-256577
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-256577: (1.019888895s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (101.242746ms)

-- stdout --
	* [NoKubernetes-018191] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
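
As the stderr spells out, --no-kubernetes cannot be combined with --kubernetes-version. If the version is coming from global config rather than the command line, the printed fix applies:

	minikube config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --driver=docker --container-runtime=containerd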

TestNoKubernetes/serial/StartWithK8s (38.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-018191 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-018191 --driver=docker  --container-runtime=containerd: (38.203111285s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-018191 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.63s)

TestNoKubernetes/serial/StartWithStopK8s (17.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.004733628s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-018191 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-018191 status -o json: exit status 2 (380.664476ms)

-- stdout --
	{"Name":"NoKubernetes-018191","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-018191
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-018191: (2.103327525s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.49s)
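
Worth noting: with Kubernetes disabled, status -o json exits 2 and reports the host Running while kubelet and apiserver are Stopped, which is the expected shape rather than an error:

	out/minikube-linux-arm64 -p NoKubernetes-018191 status -o json
	# {"Name":"NoKubernetes-018191","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}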

TestNoKubernetes/serial/Start (6.55s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-018191 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.546835865s)
--- PASS: TestNoKubernetes/serial/Start (6.55s)

TestNetworkPlugins/group/false (5.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-394193 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-394193 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (372.653417ms)

-- stdout --
	* [false-394193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18241
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0307 18:17:12.074598  463111 out.go:291] Setting OutFile to fd 1 ...
	I0307 18:17:12.075038  463111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:17:12.075067  463111 out.go:304] Setting ErrFile to fd 2...
	I0307 18:17:12.075074  463111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 18:17:12.075409  463111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18241-280769/.minikube/bin
	I0307 18:17:12.075888  463111 out.go:298] Setting JSON to false
	I0307 18:17:12.076936  463111 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7176,"bootTime":1709828256,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 18:17:12.077096  463111 start.go:139] virtualization:  
	I0307 18:17:12.080353  463111 out.go:177] * [false-394193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 18:17:12.083073  463111 out.go:177]   - MINIKUBE_LOCATION=18241
	I0307 18:17:12.083209  463111 notify.go:220] Checking for updates...
	I0307 18:17:12.096448  463111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 18:17:12.099378  463111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18241-280769/kubeconfig
	I0307 18:17:12.102336  463111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18241-280769/.minikube
	I0307 18:17:12.105151  463111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 18:17:12.110889  463111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 18:17:12.114375  463111 config.go:182] Loaded profile config "NoKubernetes-018191": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0307 18:17:12.114564  463111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 18:17:12.175098  463111 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 18:17:12.175253  463111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 18:17:12.314866  463111 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 18:17:12.25474299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 18:17:12.314982  463111 docker.go:295] overlay module found
	I0307 18:17:12.317257  463111 out.go:177] * Using the docker driver based on user configuration
	I0307 18:17:12.319323  463111 start.go:297] selected driver: docker
	I0307 18:17:12.319343  463111 start.go:901] validating driver "docker" against <nil>
	I0307 18:17:12.319364  463111 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 18:17:12.322258  463111 out.go:177] 
	W0307 18:17:12.324213  463111 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0307 18:17:12.326179  463111 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-394193 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-394193

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-394193

>>> host: /etc/nsswitch.conf:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/hosts:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/resolv.conf:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-394193

>>> host: crictl pods:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: crictl containers:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> k8s: describe netcat deployment:
error: context "false-394193" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-394193" does not exist

>>> k8s: netcat logs:
error: context "false-394193" does not exist

>>> k8s: describe coredns deployment:
error: context "false-394193" does not exist

>>> k8s: describe coredns pods:
error: context "false-394193" does not exist

>>> k8s: coredns logs:
error: context "false-394193" does not exist

>>> k8s: describe api server pod(s):
error: context "false-394193" does not exist

>>> k8s: api server logs:
error: context "false-394193" does not exist

>>> host: /etc/cni:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: ip a s:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: ip r s:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: iptables-save:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: iptables table nat:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> k8s: describe kube-proxy daemon set:
error: context "false-394193" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-394193" does not exist

>>> k8s: kube-proxy logs:
error: context "false-394193" does not exist

>>> host: kubelet daemon status:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: kubelet daemon config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> k8s: kubelet logs:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-394193

>>> host: docker daemon status:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: docker daemon config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/docker/daemon.json:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: docker system info:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: cri-docker daemon status:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: cri-docker daemon config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: cri-dockerd version:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: containerd daemon status:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: containerd daemon config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/containerd/config.toml:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: containerd config dump:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: crio daemon status:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: crio daemon config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: /etc/crio:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

>>> host: crio config:
* Profile "false-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-394193"

----------------------- debugLogs end: false-394193 [took: 4.997418888s] --------------------------------
helpers_test.go:175: Cleaning up "false-394193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-394193
--- PASS: TestNetworkPlugins/group/false (5.57s)
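
Note: the MK_USAGE exit captured above is the expected outcome of this "false" network-plugin case, which appears to start minikube with CNI disabled; with the containerd runtime, minikube refuses to proceed without one. A minimal sketch of the behavior (hypothetical profile name "cni-demo"):

	# Rejected: containerd requires a CNI, so disabling it trips MK_USAGE
	out/minikube-linux-arm64 start -p cni-demo --driver=docker --container-runtime=containerd --cni=false
	# Accepted: naming any CNI (bridge here) satisfies the check
	out/minikube-linux-arm64 start -p cni-demo --driver=docker --container-runtime=containerd --cni=bridge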

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active --quiet service kubelet": exit status 1 (358.653005ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
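
Note: the non-zero exit above is the assertion itself. systemctl's is-active returns 0 only for an active unit (3 means inactive), so a failing check is exactly what "Kubernetes is not running" looks like. A sketch of the same probe against this profile:

	out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero (ssh surfaces systemd's 3 = inactive) confirms kubelet is down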

TestNoKubernetes/serial/ProfileList (0.8s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-018191
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-018191: (1.268644113s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-018191 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-018191 --driver=docker  --container-runtime=containerd: (7.141138109s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.14s)
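
Note: the ~7s restart works because a bare `start -p` reuses the saved profile config, and that config (logged earlier as KubernetesVersion=v0.0.0 for NoKubernetes-018191) keeps Kubernetes disabled, so --no-kubernetes does not need to be repeated. A sketch of confirming the state after such a restart:

	out/minikube-linux-arm64 start -p NoKubernetes-018191 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active kubelet"   # expect "inactive"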

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-018191 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.199008ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStartStop/group/old-k8s-version/serial/FirstStart (154.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0307 18:19:11.653283  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-997124 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m34.443812205s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.44s)

TestStartStop/group/no-preload/serial/FirstStart (79.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-769637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-769637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m19.149783239s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-997124 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ecdc8610-45b4-41cc-b2d6-9e575f9d0dd1] Pending
helpers_test.go:344: "busybox" [ecdc8610-45b4-41cc-b2d6-9e575f9d0dd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ecdc8610-45b4-41cc-b2d6-9e575f9d0dd1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003737702s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-997124 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-997124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-997124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032371666s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-997124 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-997124 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-997124 --alsologtostderr -v=3: (12.537470854s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-997124 -n old-k8s-version-997124
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-997124 -n old-k8s-version-997124: exit status 7 (99.71288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-997124 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
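
Note: `--format` on `minikube status` takes a Go template over the status struct, which is why {{.Host}} prints just "Stopped" here; the non-zero exit (7 in this run) on a stopped cluster is tolerated by the test ("may be ok"). A sketch of the same probe, with shell quoting assumed around the template:

	out/minikube-linux-arm64 status -p old-k8s-version-997124 --format='{{.Host}}' || echo "exit $? (stopped cluster, may be ok)"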

TestStartStop/group/no-preload/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-769637 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca2c86a3-3791-4b4f-9ece-ecd14e6e09ba] Pending
helpers_test.go:344: "busybox" [ca2c86a3-3791-4b4f-9ece-ecd14e6e09ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca2c86a3-3791-4b4f-9ece-ecd14e6e09ba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003722978s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-769637 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-769637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-769637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.534451081s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-769637 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

TestStartStop/group/no-preload/serial/Stop (12.22s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-769637 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-769637 --alsologtostderr -v=3: (12.224795089s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769637 -n no-preload-769637
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769637 -n no-preload-769637: exit status 7 (94.301281ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-769637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (289.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-769637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 18:23:08.249283  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 18:24:11.653313  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-769637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m48.920465747s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-769637 -n no-preload-769637
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.29s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8qcdk" [9053e8ed-7eaf-40df-9fea-9cbcc01642cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004330193s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8qcdk" [9053e8ed-7eaf-40df-9fea-9cbcc01642cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004814345s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-769637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-769637 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
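
Note: the image check lists everything in the container runtime as JSON and reports repositories outside the expected Kubernetes/minikube registries; kindnetd and the busybox test image are flagged but allowed. A rough equivalent, assuming jq is installed and that each JSON entry carries a repoTags array:

	out/minikube-linux-arm64 -p no-preload-769637 image list --format=json | jq -r '.[].repoTags[]'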

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-769637 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769637 -n no-preload-769637
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769637 -n no-preload-769637: exit status 2 (314.990505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769637 -n no-preload-769637
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769637 -n no-preload-769637: exit status 2 (327.789942ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-769637 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-769637 -n no-preload-769637
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-769637 -n no-preload-769637
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)
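
Note: the Paused/Stopped pair above is the signature of a successful pause: the apiserver is frozen while the kubelet is stopped, and both status probes exit 2 until unpause. A sketch of the round trip, assuming the two fields can be combined in one Go template:

	out/minikube-linux-arm64 pause -p no-preload-769637
	out/minikube-linux-arm64 status -p no-preload-769637 --format='{{.APIServer}}/{{.Kubelet}}'   # Paused/Stopped while paused
	out/minikube-linux-arm64 unpause -p no-preload-769637
	out/minikube-linux-arm64 status -p no-preload-769637 --format='{{.APIServer}}/{{.Kubelet}}'   # Running/Running afterwards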

TestStartStop/group/embed-certs/serial/FirstStart (66.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-262201 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-262201 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m6.03975894s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.04s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k57m5" [89d49c31-99a0-4d06-b721-748649b5d8e2] Running
E0307 18:28:08.249659  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004787913s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k57m5" [89d49c31-99a0-4d06-b721-748649b5d8e2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004815239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-997124 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-997124 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (3.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-997124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-997124 -n old-k8s-version-997124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-997124 -n old-k8s-version-997124: exit status 2 (477.48891ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-997124 -n old-k8s-version-997124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-997124 -n old-k8s-version-997124: exit status 2 (450.65915ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-997124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-997124 -n old-k8s-version-997124
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-997124 -n old-k8s-version-997124
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.74s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-171851 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-171851 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m1.096481219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.10s)

TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-262201 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a5aa33a-fc78-49b6-86f8-f4f6646082ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a5aa33a-fc78-49b6-86f8-f4f6646082ca] Running
E0307 18:29:11.653575  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004307792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-262201 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-262201 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-262201 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175590663s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-262201 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-262201 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-262201 --alsologtostderr -v=3: (12.112374079s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-262201 -n embed-certs-262201
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-262201 -n embed-certs-262201: exit status 7 (140.131716ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-262201 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171851 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b3eb3cab-c3ac-40e1-9c73-bbaa313b2b5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b3eb3cab-c3ac-40e1-9c73-bbaa313b2b5b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.007797568s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-171851 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.75s)

TestStartStop/group/embed-certs/serial/SecondStart (276.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-262201 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-262201 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m35.942759929s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-262201 -n embed-certs-262201
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (276.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-171851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-171851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.628388972s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-171851 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.77s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-171851 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-171851 --alsologtostderr -v=3: (12.488981075s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851: exit status 7 (76.652865ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-171851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
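The step above encodes the expected lifecycle: after a stop, "status --format={{.Host}}" prints Stopped and exits with code 7 (which the test tolerates), and addons can still be toggled so they take effect on the next start. A rough way to repeat the check by hand (a sketch only, assuming a stopped profile named "demo" and the release minikube binary in place of the out/minikube-linux-arm64 build used here):

	# exit code 7 corresponds to a stopped host, matching the run above
	minikube status --format='{{.Host}}' -p demo || echo "status exited $? (7 is expected while stopped)"
	# addon changes are recorded even while the cluster is down
	minikube addons enable dashboard -p demo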

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-171851 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 18:31:22.871205  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:22.876498  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:22.886815  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:22.906993  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:22.947241  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:23.027757  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:23.188340  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:23.509084  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:24.149974  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:25.430324  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:27.990680  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:33.111323  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:31:43.351621  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:32:03.831836  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:32:27.433692  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.438975  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.449299  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.469562  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.510379  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.590716  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:27.751005  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:28.071957  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:28.712593  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:29.993053  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:32.553462  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:37.674608  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:44.792197  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
E0307 18:32:47.915151  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:32:51.299142  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 18:33:08.248627  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/addons-493601/client.crt: no such file or directory
E0307 18:33:08.396012  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
E0307 18:33:49.356294  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-171851 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m37.840813879s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.28s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-44ncj" [d87f9add-064a-47bc-b38a-53dfea509a69] Running
E0307 18:34:06.712644  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004233111s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-44ncj" [d87f9add-064a-47bc-b38a-53dfea509a69] Running
E0307 18:34:11.653359  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004109808s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-262201 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-262201 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
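VerifyKubernetesImages dumps the runtime's image store as JSON and flags anything outside the expected minikube set, which is how the kindnetd and busybox entries above get reported. The same data can be eyeballed directly (a sketch, assuming a "demo" profile and jq; the repoTags field name is from recent minikube output and may differ by version):

	# list every image the container runtime knows about, one tag per line
	minikube -p demo image list --format=json | jq -r '.[].repoTags[]?'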

TestStartStop/group/embed-certs/serial/Pause (3.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-262201 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-262201 -n embed-certs-262201
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-262201 -n embed-certs-262201: exit status 2 (332.630949ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-262201 -n embed-certs-262201
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-262201 -n embed-certs-262201: exit status 2 (326.748138ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-262201 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-262201 -n embed-certs-262201
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-262201 -n embed-certs-262201
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.53s)
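The Pause check is a fixed sequence: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped (both via exit status 2, which the test treats as acceptable), then unpause and confirm both statuses recover. By hand it looks roughly like this (a sketch against an assumed running "demo" profile):

	minikube pause -p demo
	minikube status --format='{{.APIServer}}' -p demo   # prints Paused, exits 2
	minikube status --format='{{.Kubelet}}' -p demo     # prints Stopped, exits 2
	minikube unpause -p demo
	minikube status --format='{{.APIServer}}' -p demo   # healthy again, exits 0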

TestStartStop/group/newest-cni/serial/FirstStart (46.86s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-119682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-119682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (46.86060254s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.86s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzwft" [e2c56ae9-5aac-4091-8d03-d1ce06fda82c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004015794s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wzwft" [e2c56ae9-5aac-4091-8d03-d1ce06fda82c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004066304s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-171851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-171851 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-171851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851: exit status 2 (370.537481ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851: exit status 2 (417.075479ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-171851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-171851 -n default-k8s-diff-port-171851
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.79s)

TestNetworkPlugins/group/auto/Start (65.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m5.788071759s)
--- PASS: TestNetworkPlugins/group/auto/Start (65.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-119682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-119682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.876610712s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.88s)

TestStartStop/group/newest-cni/serial/Stop (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-119682 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-119682 --alsologtostderr -v=3: (1.33086529s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-119682 -n newest-cni-119682
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-119682 -n newest-cni-119682: exit status 7 (97.155985ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-119682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (20.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-119682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 18:35:11.276645  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-119682 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (19.654209982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-119682 -n newest-cni-119682
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-119682 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-119682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-119682 -n newest-cni-119682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-119682 -n newest-cni-119682: exit status 2 (364.240769ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-119682 -n newest-cni-119682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-119682 -n newest-cni-119682: exit status 2 (323.397914ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-119682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-119682 -n newest-cni-119682
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-119682 -n newest-cni-119682
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.29s)
E0307 18:40:57.155367  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/auto-394193/client.crt: no such file or directory
E0307 18:41:02.275998  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/auto-394193/client.crt: no such file or directory
E0307 18:41:12.517125  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/auto-394193/client.crt: no such file or directory
E0307 18:41:22.871464  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (64.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m4.650722926s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)
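KubeletFlags only asserts that a kubelet process is up inside the node and captures the command line it was started with; the same view is available over ssh (a sketch using the profile from the run above):

	# print the kubelet PID and its full argument list
	minikube ssh -p auto-394193 "pgrep -a kubelet"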

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b7gjj" [396506a5-1844-47fe-a501-e3b2a1b99fe6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b7gjj" [396506a5-1844-47fe-a501-e3b2a1b99fe6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004255993s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)
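Each NetCatPod step force-replaces the same netcat deployment and then polls until a pod matching app=netcat is Running and Ready. Outside the harness, plain kubectl can do the wait (a sketch; the manifest is the repo's testdata file referenced above):

	kubectl --context auto-394193 replace --force -f testdata/netcat-deployment.yaml
	# block until the pod behind the app=netcat selector reports Ready, up to the test's 15m budget
	kubectl --context auto-394193 wait --for=condition=ready pod -l app=netcat --timeout=15m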

TestNetworkPlugins/group/auto/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.26s)
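The three connectivity probes above differ only in the target: DNS resolves kubernetes.default through the cluster resolver, Localhost checks that the pod can reach its own port over 127.0.0.1, and HairPin checks that the pod can reach itself through its own service name, i.e. traffic that leaves via the service VIP and loops back. Reproduced by hand against the same deployment (a sketch; the nc flags mirror the commands above):

	kubectl --context auto-394193 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"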

TestNetworkPlugins/group/calico/Start (78.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m18.909534954s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.91s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2bj4v" [164f9bc6-74e7-47f0-bf40-c4476d79f271] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004384109s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvjtd" [67e4b722-a432-4e31-b275-0867ab578bf5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 18:36:50.553009  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/old-k8s-version-997124/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cvjtd" [67e4b722-a432-4e31-b275-0867ab578bf5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003939234s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (65.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0307 18:37:27.433949  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m5.829975615s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.83s)
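Note that --cni here takes a path to a manifest (testdata/kube-flannel.yaml) rather than one of the built-in names (kindnet, calico, flannel, bridge) exercised by the sibling groups, so any CNI that ships as a Kubernetes manifest can be tested the same way (a sketch, profile name assumed):

	# start a cluster and apply a user-supplied CNI manifest instead of a built-in plugin
	minikube start -p demo --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd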

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9vjj7" [3e88f3c1-3399-4b74-a65b-6106869938fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007047922s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-94dv2" [ab85ceb7-a406-4125-a3f5-29a13383a781] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 18:37:55.117561  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/no-preload-769637/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-94dv2" [ab85ceb7-a406-4125-a3f5-29a13383a781] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003257014s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (92.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m32.196168636s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s2klf" [0d4798a6-f47f-4b1b-845b-099103936e49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s2klf" [0d4798a6-f47f-4b1b-845b-099103936e49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005565144s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (63.65s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0307 18:39:11.653770  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/functional-529713/client.crt: no such file or directory
E0307 18:39:25.674868  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.680099  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.690340  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.710584  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.750839  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.831162  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:25.991694  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:26.312262  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:26.952725  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:28.233089  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:30.793308  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:35.914380  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
E0307 18:39:46.154808  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.650274237s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.65s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x4n7f" [d7bf2590-8d0a-4363-a913-b5125a12f537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x4n7f" [d7bf2590-8d0a-4363-a913-b5125a12f537] Running
E0307 18:40:06.635843  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/default-k8s-diff-port-171851/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006509377s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gfl8k" [43598f62-fc94-4f34-b589-7bc831024909] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005137624s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rn5x7" [145babe0-9d01-4f5b-b0fb-1fabc159cac3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rn5x7" [145babe0-9d01-4f5b-b0fb-1fabc159cac3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003713352s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

TestNetworkPlugins/group/bridge/Start (53.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-394193 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (53.373880709s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.37s)
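
The Start step is a plain minikube invocation; the flags are worth spelling out, since every network-plugin group in this report uses the same shape:

  # --cni=bridge selects the built-in bridge CNI instead of auto-detection;
  # --wait=true --wait-timeout=15m blocks until core components report healthy.
  out/minikube-linux-arm64 start -p bridge-394193 --memory=3072 \
    --cni=bridge --driver=docker --container-runtime=containerd \
    --wait=true --wait-timeout=15m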

TestNetworkPlugins/group/flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-394193 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-394193 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h2klx" [47c52acb-bf50-4430-a485-76906a17b698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h2klx" [47c52acb-bf50-4430-a485-76906a17b698] Running
E0307 18:41:32.997657  286169 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/auto-394193/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003816588s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-394193 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-394193 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-699370 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-699370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-699370
--- SKIP: TestDownloadOnlyKic (0.58s)
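
Although skipped on arm64, the kic (kubernetes-in-container) download-only flow this test exercises is just a start/delete pair, as run above (logging flags omitted):

  # Fetch the kic base image and kubernetes assets without booting a cluster.
  out/minikube-linux-arm64 start --download-only -p download-docker-699370 \
    --driver=docker --container-runtime=containerd
  # Remove the throwaway profile again.
  out/minikube-linux-arm64 delete -p download-docker-699370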

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-951805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-951805
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/kubenet (4.86s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-394193 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-394193

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-394193

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/hosts:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/resolv.conf:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-394193

>>> host: crictl pods:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: crictl containers:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> k8s: describe netcat deployment:
error: context "kubenet-394193" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-394193" does not exist

>>> k8s: netcat logs:
error: context "kubenet-394193" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-394193" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-394193" does not exist

>>> k8s: coredns logs:
error: context "kubenet-394193" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-394193" does not exist

>>> k8s: api server logs:
error: context "kubenet-394193" does not exist

>>> host: /etc/cni:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: ip a s:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: ip r s:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: iptables-save:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: iptables table nat:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-394193" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-394193" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-394193" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: kubelet daemon config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> k8s: kubelet logs:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18241-280769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 18:16:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-018191
contexts:
- context:
    cluster: NoKubernetes-018191
    extensions:
    - extension:
        last-update: Thu, 07 Mar 2024 18:16:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-018191
  name: NoKubernetes-018191
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-018191
  user:
    client-certificate: /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/NoKubernetes-018191/client.crt
    client-key: /home/jenkins/minikube-integration/18241-280769/.minikube/profiles/NoKubernetes-018191/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-394193

>>> host: docker daemon status:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: docker daemon config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: docker system info:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: cri-docker daemon status:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: cri-docker daemon config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: cri-dockerd version:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: containerd daemon status:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: containerd daemon config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: containerd config dump:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: crio daemon status:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: crio daemon config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: /etc/crio:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

>>> host: crio config:
* Profile "kubenet-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-394193"

----------------------- debugLogs end: kubenet-394193 [took: 4.648771398s] --------------------------------
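
Every probe above fails the same way because the kubenet profile was never created: the test is skipped before "minikube start" runs, so neither a kubeconfig context nor a minikube profile exists. A quick sketch of the two listings that confirm this (no kubenet-394193 entry appears in either):

  kubectl config get-contexts
  out/minikube-linux-arm64 profile list
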
helpers_test.go:175: Cleaning up "kubenet-394193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-394193
--- SKIP: TestNetworkPlugins/group/kubenet (4.86s)

TestNetworkPlugins/group/cilium (4.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-394193 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-394193" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-394193

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: cri-docker daemon status:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: cri-docker daemon config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: cri-dockerd version:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: containerd daemon status:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: containerd daemon config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: containerd config dump:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: crio daemon status:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: crio daemon config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: /etc/crio:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

>>> host: crio config:
* Profile "cilium-394193" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-394193"

----------------------- debugLogs end: cilium-394193 [took: 4.783619601s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-394193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-394193
--- SKIP: TestNetworkPlugins/group/cilium (4.94s)
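Note: every host/k8s probe above failed for the same reason: the "cilium-394193" profile had already been deleted (or was never created) by the time the post-mortem debug logger ran, so each probe fell through to minikube's "profile not found" hint. A minimal local sketch of the same checks, using only commands that appear in this log (the profile name is specific to this CI run):

	# List known profiles; a deleted or never-created profile will not appear here.
	out/minikube-linux-arm64 profile list

	# Recreate the probed profile, as the hint above suggests.
	out/minikube-linux-arm64 start -p cilium-394193

	# Clean up afterwards, mirroring the delete step at helpers_test.go:178 above.
	out/minikube-linux-arm64 delete -p cilium-394193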
